Carbon and methane cycling in arsenic-contaminated aquifers

Geogenic arsenic (As) contamination of groundwater is a health threat to millions of people worldwide, particularly in alluvial regions of South and Southeast Asia. Mitigation measures are often hindered by high heterogeneities in As concentrations, the cause(s) of which are elusive. Here we used a comprehensive suite of stable isotope analyses and hydrogeochemical parameters to shed light on the mechanisms in a typical high-As Holocene aquifer near Hanoi, where groundwater is advected to a low-As Pleistocene aquifer. Carbon isotope signatures (δ13C-CH4, δ13C-DOC, δ13C-DIC) provided evidence that fermentation, methanogenesis and methanotrophy are actively contributing to the As heterogeneity. Methanogenesis occurred where As levels are high (>200 μg/L) and DOC-enriched aquitard pore water infiltrates into the aquifer. Along the flowpath to the Holocene/Pleistocene aquifer transition, methane oxidation causes a strong shift in δ13C-CH4 from −87‰ to +47‰, indicating high reactivity. These findings demonstrate a previously overlooked role of methane cycling and DOC infiltration in high-As aquifers. © 2021 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

Introduction

The health of tens of millions of people worldwide is affected by chronic exposure to arsenic-polluted groundwater resources (WHO, 2011; Karagas et al., 2015; Podgorski and Berg, 2020). Diseases caused by this exposure are particularly common in the floodplains and deltas of the large East and South Asian river systems (Smith et al., 2000; Harvey et al., 2002; McArthur et al., 2004; Berg et al., 2007; Fendorf et al., 2010; Zhang et al., 2017). There, the natural abundance of bioavailable natural organic matter (OM) in geologically young Holocene deposits often leads to hydrogeochemical conditions under which the reductive dissolution of As-bearing Fe(III) (oxyhydr)oxide minerals triggers As release and widespread groundwater As pollution (Smedley and Kinniburgh, 2002; Islam et al., 2004; Stuckey et al., 2016). In contrast, older sediments of the Pleistocene period generally exhibit less reducing aquifer conditions, with As concentrations usually <10 μg/L. A striking feature of As-affected aquifers is the high, local-scale spatial variability of As concentrations (van Geen et al., 2006; Eiche et al., 2008; Fendorf et al., 2010; Cozzarelli et al., 2016; Ziegler et al., 2017; Polya et al., 2019), which is thought to be linked to hydrogeological heterogeneities and to associated variations in dominating redox processes.

[Fig. 1 caption, panel A truncated: ... (van Geen et al., 2013). B) Satellite image of the study site (Google Earth). The white arrow indicates the general groundwater flow direction towards Hanoi (van Geen et al., 2013). The coloured dots depict the locations of the studied groundwater wells (numbers in white; red dots for As > 100 μg/L, orange dots for As 10-100 μg/L, blue dots for As < 10 μg/L; well depths ranging from 18 to 53 m b.g.l.). Sampling locations for river and riverbank water (light blue) and for pond water (light green) are also shown. Figure adapted from Stopelli et al. (2020).]

Besides Fe(III) (oxyhydr)oxides and other redox-sensitive mineral phases, the
abundance of dissolved sulphate (SO42−) and the formation of As-sulphide minerals have also been recognised to impact the mobility of As (Bostick and Fendorf, 2003; Buschmann and Berg, 2009). Furthermore, the bioavailable OM in young riverbank deposits, ponds or channel infill plays an important role in creating hot spots of As mobilisation, while older sedimentary OM, which tends to be more recalcitrant, is of lower significance (Harvey et al., 2002; McArthur et al., 2004; Rowland et al., 2007; Polizzotto et al., 2008; Neumann et al., 2010; Postma et al., 2012; Lawson et al., 2016; Kulkarni et al., 2017; Richards et al., 2019; Glodowska et al., 2020a; Wallis et al., 2020). However, in this context the role of methane (CH4) cycling has rarely been considered, especially in natural settings, although several studies detected CH4 in As-contaminated aquifers (Liu et al., 2009; Postma et al., 2012, 2016; Sø et al., 2018) and indicated that elevated CH4 and As concentrations might be related (Buschmann and Berg, 2009; Sracek et al., 2018). On larger scales (100 m to kilometres), and based on the genesis and the resulting mineralogical characteristics of Pleistocene and Holocene sediments, high- and low-As aquifers often show highly contrasting redox conditions. They are found adjacent to each other, separated by redox transition zones (RTZ). Generally, the formation and stability of RTZs are linked to a variety of factors, including the interaction of transport processes, microbial activity and the stability of As host mineral phases, mainly Fe-bearing phases. Here, we present a first detailed isotope study of CH4 and C cycling and its potential impact on As mobility at the metre to kilometre scale. Our field site at Van Phuc near Hanoi (Vietnam) has previously been the subject of a wide range of comprehensive studies that investigated geological, hydrochemical, microbial, lithological and anthropogenic characteristics that are common at many As-polluted aquifers in Asia (Eiche et al., 2008; van Geen et al., 2013; Stahl et al., 2016; Eiche et al., 2017; Nghiem et al., 2020; Stopelli et al., 2020). The Van Phuc site features a relatively stable lateral groundwater flow induced by large-scale groundwater pumping in nearby Hanoi (van Geen et al., 2013) (Fig. 1A). Surface water from the Red River infiltrates through OM-rich riverbed sediments, where As is mobilised by reductive dissolution, before migrating through a Fe-reducing high-As Holocene aquifer and across a redox transition zone (RTZ) into an older, low-As aquifer of Pleistocene deposits (Fig. 1B). Both aquifers are capped by a silty aquitard of 15-22 m thickness that contains sandy lenses and intercalations of OM (Eiche et al., 2008). The study site also hosts several eutrophic ponds that are used for aquaculture. We use a comprehensive suite of isotopic analyses (δ13C-DOC, δ13C-CH4, δ13C-DIC, plus stable water isotopes δ18O and δ2H) to identify OM sources, as well as to elucidate processes of carbon cycling and the potential role of CH4 cycling in As mobilisation.

Experimental design

Groundwater samples were collected in July 2018 and again in April 2019 in the village of Van Phuc, which is located along a Red River meander some 15 km southeast of Hanoi, Vietnam. Nine new monitoring wells were installed between the riverbank and the redox transition zone (RTZ) in December 2018, where previous hydrochemical data were scarce, and an additional well was recovered.
Overall, a total of 29 groundwater samples (well depths 18 to 53 m b.g.l., 1-metre well screens) and nine surface water samples (Red River, riverbank pore water, ponds) were collected for analyses.

Sample collection

Groundwater samples were collected with an electric submersion pump after the stabilisation of pH, redox potential (Eh), oxygen and conductivity; these were measured using a daily calibrated portable multi-analyser (WTW 3630). The Eh values were normalised to the standard hydrogen electrode (SHE). River and pond water samples were collected in a bucket. Riverbank pore water was sampled by pushing a hollow stainless-steel rod (1 cm I.D.) some 25-30 cm into the sediment. The rod was screened at the tip, and pore water was drawn with a syringe through a tube from the inner part of the rod (Stahl et al., 2016; Stopelli et al., 2020). The field parameters (O2, pH, Eh, conductivity, temperature) of the samples were measured immediately. Alkalinity was determined directly in the field by titration as HCO3− alkalinity (Merck MColortest 11109 test kit) and expressed as mg C/L. Because alkalinity may not depend solely on dissolved HCO3−, dissolved inorganic carbon (DIC) concentrations were calculated with PHREEQC (v3.4.0) (Parkhurst and Appelo, 2013) based on the measured hydrochemical parameters. All values lower than the limit of quantification (LOQ) of each method were set to half of the LOQ for visualisation in the graphs. Sample aliquots for hydrochemical analyses were all filtered through 0.45-μm cellulose acetate filters into pre-rinsed polypropylene bottles; three aliquots also underwent acidification to pH < 2 to improve their stability (one for metals and trace elements, NH4+ and PO43−, acidified with 1% v/v HNO3; one for As(III), acidified with 1% v/v HNO3 after passing the sample through an As(III)/As(V) separation cartridge (MetalSoft); and one for DOC, acidified with 1% v/v HCl). Immediately after collection, all aliquots were stored at 4 °C and protected from light until the analyses. Detailed descriptions of the sampling technique and quality control for the hydrochemical parameters are available in Stopelli et al. (2020).
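The PHREEQC step above essentially converts a field titration (alkalinity) plus pH into a full DIC estimate. A minimal sketch of that conversion for the simple carbonate system is given below; the equilibrium constants are textbook values at 25 °C and zero ionic strength, and the function name and example input are ours rather than the paper's (PHREEQC additionally corrects for temperature, activity and other weak acids).

```python
M_C = 12.011        # g/mol carbon

K1 = 10.0**-6.35    # H2CO3* = H+ + HCO3-   (25 degC, zero ionic strength)
K2 = 10.0**-10.33   # HCO3-  = H+ + CO3^2-
KW = 10.0**-14.0    # water autoprotolysis

def dic_from_alkalinity(alk_mgC_L: float, pH: float) -> float:
    """Estimate DIC (mg C/L) from field HCO3- alkalinity (mg C/L) and pH."""
    h = 10.0**-pH
    alk_eq = alk_mgC_L / 1000.0 / M_C             # mg C/L -> mol charge/L
    # Carbonate alkalinity balance: Alk = [HCO3-] + 2[CO3--] + [OH-] - [H+]
    hco3 = (alk_eq - KW / h + h) / (1.0 + 2.0 * K2 / h)
    co3 = K2 * hco3 / h
    h2co3 = h * hco3 / K1                         # dissolved CO2 + true H2CO3
    return (h2co3 + hco3 + co3) * M_C * 1000.0    # mol/L -> mg C/L

# Hypothetical reducing groundwater: pH 7.0, 150 mg C/L titrated alkalinity
print(round(dic_from_alkalinity(150.0, 7.0), 1))  # -> ~183.5 mg C/L
```

At near-neutral pH a sizeable share of the DIC is dissolved CO2, which is why the calculated DIC can exceed the titrated HCO3− alkalinity in these reducing groundwaters.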
Hydrochemical analyses

Major cations and trace elements were determined by inductively coupled plasma mass spectrometry (ICP-MS, Agilent 7500 and 8900); dissolved nitrogen (DN) and dissolved organic carbon (DOC, which does not include dissolved methane) with a total N and C analyser (Shimadzu TOC-L CSH); NH4+ and ortho-PO43− by photometry, using the indophenol and molybdate methods, respectively; and anions by ion chromatography (Metrohm 761 Compact IC).

Stable isotope analyses

The water δ2H and δ18O ratios were determined from 8-mL samples collected in amber glass vials without headspace and analysed using a cavity ring-down spectrometer (Picarro). Replicate standard and sample measurements indicated reproducibility within 0.5‰ for δ18O and 3‰ for δ2H. The δ2H and δ18O ratios were normalised to Vienna Standard Mean Ocean Water (VSMOW). Samples for total methane determination were collected directly from the pumping tube by inserting a needle into 5-mL evacuated glass vials (Labco 819W). A headspace of half to two-thirds of the total volume was left, and the vials were immediately frozen on dry ice in an upside-down position to trap the gas phase above the frozen water in the vial headspace and to ensure sample stabilisation. Methane concentrations were analysed after complete sample thawing by gas chromatography (Shimadzu GC-2014) via the headspace equilibration method, following the procedure reported in Sø et al. (2018).
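In the headspace equilibration method, the dissolved concentration is back-calculated from the CH4 measured in the vial headspace plus the CH4 remaining in solution at equilibrium. A simplified sketch follows; the vial and headspace volumes, temperature and Henry's constant are illustrative assumptions, not the calibration used by Sø et al. (2018).

```python
R = 0.08206        # gas constant, L*atm/(mol*K)
M_CH4 = 16.04      # g/mol
KH = 0.0014        # Henry's constant for CH4 at 25 degC, mol/(L*atm) (approx.)

def dissolved_ch4_mg_per_l(x_ch4, v_vial_ml=5.0, v_head_ml=3.0,
                           t_c=25.0, p_atm=1.0):
    """Back-calculate the original dissolved CH4 (mg/L) of a sample from
    the CH4 mole fraction measured in the equilibrated vial headspace."""
    v_head = v_head_ml / 1000.0                     # L
    v_water = (v_vial_ml - v_head_ml) / 1000.0      # L
    p_ch4 = x_ch4 * p_atm                           # CH4 partial pressure, atm
    n_gas = p_ch4 * v_head / (R * (t_c + 273.15))   # mol CH4 in the headspace
    n_aq = KH * p_ch4 * v_water                     # mol CH4 still dissolved
    return (n_gas + n_aq) * M_CH4 * 1000.0 / v_water

print(round(dissolved_ch4_mg_per_l(0.05), 1))  # ~50 mg/L, as in the highest wells
```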
Samples for δ13C-CH4 were collected in April 2019 in 120-mL serum bottles, filled anoxically by water overflow, poisoned with 20 mg Cu(I)Cl, sealed with thick butyl-rubber stoppers and aluminium crimps, and preserved at +4 °C until analysis. In the laboratory, a 20-mL nitrogen (N2) headspace was inserted for overnight equilibration. First, the samples were injected and concentrated with a series of traps in a trace gas unit (T/GAS PRECON, Micromass UK Ltd). The purified gas was then analysed with an isotope ratio mass spectrometer (IRMS; GV Instruments, Isoprime). Replicate sample and standard measurements were reproducible within 2‰. The δ13C-CH4 data are normalised to the Vienna Pee Dee Belemnite (VPDB) reference standard. Samples for δ13C-DIC were collected in July 2018, filtered through pre-combusted 0.7-μm glass fibre filters into 40-mL amber glass vials with black butyl septa, and stored at +4 °C until analysed. Samples for δ13C-DOC were filtered through 0.7-μm glass fibre filters into 40-mL amber glass vials with white silicone-Teflon septa and acidified to pH < 2 with analytical-grade HCl. Samples for δ13C-DOC were collected in July 2018 and, for the newly installed wells, in April 2019. Groundwater from six wells was collected on both occasions to check the comparability of isotopic values between the different sampling campaigns (double values in Table S1). Isotopic ratios were determined with an elemental analyser coupled to an isotope ratio mass spectrometer (EA-IRMS, EA Thermo Flash 2000 and IRMS Thermo Delta V) at the Stable Isotope Ecology Laboratory, University of Georgia. Replicate sample measurements resulted in an analytical reproducibility of 0.5‰, while repeated sampling of the same well in two different field campaigns yielded a slightly higher variability of up to 1‰ for DOC and 2‰ for DIC. The δ13C-DIC and δ13C-DOC values are normalised to the Vienna Pee Dee Belemnite (VPDB) reference standard. Samples for δ15N-NH4+ were collected in July 2018, filtered through a 0.45-μm cellulose acetate filter into 250- and 500-mL polypropylene bottles and immediately frozen at −20 °C until analysed. In the laboratory, N was concentrated on filters following the ammonia diffusion method described in Holmes et al. (1998), and the filters were further analysed via EA-IRMS at the Stable Isotope Ecology Laboratory, University of Georgia. Replicate standard measurements indicated an analytical reproducibility within 1‰. The δ15N-NH4+ values are normalised to air (atmospheric N2).

Statistical analyses

Statistical tests and probability calculations were carried out using the PAST software, version 3.17 (Hammer et al., 2001).

Hydrology and hydrogeochemical evolution of As contamination

A comprehensive set of stable isotope signatures and hydrochemical parameters (see Methods) was determined from groundwater and surface water samples collected along a 2-km-long transect that follows the average groundwater flow direction towards Hanoi, as inferred from previous hydrological, hydrochemical and numerical studies (Fig. 1) (van Geen et al., 2013; Stopelli et al., 2020; Wallis et al., 2020). River water infiltration is the main source of groundwater recharge at the study site, with young, OM- and Fe(III)-rich riverbank sediments (Fig. 2) creating a hot spot of arsenic release at the Red River-groundwater interface (Stahl et al., 2016; Stopelli et al., 2020; Wallis et al., 2020). Dissolved As is then advected into and within the Holocene aquifer, where concentrations initially range between 20 and 200 μg/L (Fig. 2, wells 2 to 5); this route is hereafter referred to as flowpath 1. Further along the groundwater flow direction, dissolved As reaches up to 540 μg/L in the Holocene aquifer (Fig. 2, wells 6 to 8). The highest dissolved As concentrations are found in the wells that also show the largest concentrations of dissolved organic carbon (DOC), ammonium (NH4+) and dissolved methane (CH4), i.e. 5-7 mg DOC/L, 50-65 mg NH4-N/L and 40-58 mg CH4/L, respectively (Fig. 2 and Fig. S6 for NH4+). The water isotope signatures in these wells (δ18O −6 ± 1‰) are indicative of evaporative water that locally infiltrates from the aquitard into the aquifer (Fig. 2), rather than of water originating from Red River bank infiltration. Sediment coring revealed substantial lithological heterogeneity in the clayey aquitard deposits, including sandy intercalations, caused by alternating riverine and marine deposition during the Holocene period (Eiche et al., 2008, 2017; Trung et al., 2020; Kontny et al., 2021). It was also shown that the aquitard deposits contain up to 9 wt.% of sedimentary OM (Eiche et al., 2008; Glodowska et al., 2020a). The DOC-enriched pore water (5-7 mg C/L, see Fig. 2) that evolves as it percolates through the sandy intercalations creates locally highly reducing conditions as it egresses into the underlying aquifer; this route is hereafter referred to as flowpath 2. Our mixing calculations based on the water isotope ratios (δ18O and δ2H) suggest that groundwater collected just below the aquitard (i.e., wells 6a and 8a in Fig. 2) consists of up to 92% aquitard pore water (see supplementary section SI.1 for calculations along with aquitard pore water compositions).
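A two-endmember mixing calculation of this kind takes a conservative tracer (δ18O or δ2H) and solves a linear balance between the two water sources. The sketch below shows the idea; the endmember delta values are hypothetical placeholders, since the actual endmember compositions are given in the paper's supplement (SI.1).

```python
def mixing_fraction(d_sample: float, d_end1: float, d_end2: float) -> float:
    """Two-endmember mixing: fraction of endmember 1 in the sample.

    Inputs are delta values (e.g. d18O in permil) of a conservative
    tracer; assumes the sample is a binary mixture of the endmembers.
    """
    return (d_sample - d_end2) / (d_end1 - d_end2)

# Hypothetical endmembers: evaporative aquitard pore water at -5 permil,
# river-recharged groundwater at -8 permil.
f_aquitard = mixing_fraction(d_sample=-5.3, d_end1=-5.0, d_end2=-8.0)
print(f"{f_aquitard:.0%} aquitard pore water")  # -> 90%
```

Using both δ18O and δ2H gives two independent estimates of the same fraction, which is a useful internal consistency check.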
At the RTZ that marks the interface between the Holocene and the Pleistocene aquifers, both dissolved As and Fe concentrations sharply decrease (Fig. 2, wells 9 to 13, and Fig. S2), while Mn concentrations increase (Fig. S2). High-resolution mineralogical analyses of a sediment core drilled through the RTZ demonstrate the presence of newly formed As-bearing mixed-valent Fe oxides (Kontny et al., 2021). This finding suggests that Fe2+ is advected from the Holocene aquifer into the RTZ, where it induces the transformation of Fe(III) (oxyhydr)oxides into a sequence of Fe(II) or mixed-valence Fe(II/III) phases (siderite, pyrite, goethite and haematite coatings, and magnetite; Kontny et al., 2021), accompanied by net As sorption and incorporation. Further along the groundwater flowpaths into the Pleistocene aquifer, As sorption onto the abundant Fe (oxyhydr)oxides of the Pleistocene sands attenuates the As concentrations to below 5 μg/L (Eiche et al., 2008; Rathi et al., 2017; Neidhardt et al., 2018) (Fig. 2, well 14).

Sources of dissolved organic matter

The isotopic signatures of dissolved organic carbon (δ13C-DOC; Eiche et al., 2017) [...]. However, the aquifer's average TOC content is very low (<0.03 wt.%; Eiche et al., 2008) and is therefore unlikely to provide a sizeable contribution to driving biogeochemical transformations. This is in agreement with a recent reactive transport modelling analysis of the Van Phuc site, which suggested negligible OM reactivity in the Holocene aquifer sands while identifying the riverbank-groundwater interface as the dominant hotspot for OM turnover and associated As release (Wallis et al., 2020). Overall, taking into account that (i) the highest DOC concentrations (5-7 mg/L) are present below the aquitard/Holocene aquifer hydraulic connections and (ii) the water isotopes indicate pore water infiltration from the aquitard, we conclude that the elevated DOC concentrations and the corresponding δ13C-DOC signatures in parts of the Holocene aquifer (Fig. 2 and Fig. 3A, wells 4, 6a, 8a) are indicative of OM infiltrating from the aquitard.

Fermentation of dissolved organic carbon

Compared with the aquitard sedimentary δ13C-TOC signatures of −20‰ to −27‰ at the site (Eiche et al., 2008) (flowpath 2), and with the δ13C-DOC signatures of −27.5 ± 1‰ to −28 ± 1‰ in riverbank pore water (flowpath 1), the δ13C-DOC values in the studied groundwaters (−28 ± 1‰ to −31 ± 1‰) were slightly lower (Fig. 3A). This observation is consistent with the occurrence of anaerobic fermentation of OM to small molecules including, e.g., propionate and acetate, a process that is characterised by a generally very small isotopic enrichment or even some depletion (Botsch and Conrad, 2011; Conrad et al., 2014). Moreover, our recent microbiological companion study identified active and abundant bacterial communities capable of fermentative metabolism in all groundwater samples. Fermentative metabolisms generally produce CO2, H2 and a broad range of short-chain C compounds from larger organic molecules and hence transform rather recalcitrant OM into more bioavailable compounds (McMahon and Chapelle, 1991; Chapelle, 2000). Subsequently, fermentation products may serve as substrates for both methanogenesis and the microbial reduction of Fe(III) (oxyhydr)oxides (Glodowska et al., 2020a).

[Fig. 3 caption: Plots of carbon species and isotope signatures in the Holocene aquifer, at the redox transition and in the Pleistocene aquifer. A) δ13C-DOC and DOC conc.; B) δ13C-DIC and DIC conc.; C) δ13C-CH4 and CH4 conc., where the dot size is proportional to the corresponding As concentration. The data points are numbered by the progressive distance from the river, with a letter for increasing depth in cases of nested multilevel wells (see Table S1 for original data). The flat rectangles indicate groundwater from just below the aquitard/aquifer connection. The brown shading in panel A represents δ13C-TOC values of OM in aquitard sediment cores drilled between monitoring wells 6 and 8 (Eiche et al., 2017) and DOC concentrations in aquitard pore water extracted from cores at 16-18 m depth at well 6 (Supplementary material section SI.1). Whiskers represent the standard deviation of each parameter, resulting from analytical uncertainty.]

DIC isotopes indicative of methanogenesis and methanotrophy

Dissolved inorganic carbon (DIC) concentrations successively rise along groundwater flowpath 1, from 18 mg C/L in the Red River to 23-28 mg C/L in riverbank pore water, and generally reach 100-200 mg C/L in the Holocene aquifer after a few hundred metres (Figs. 2 and 3B).
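A simple two-source carbon isotope mass balance helps frame what such a DIC increase implies for δ13C-DIC. In the sketch below, the starting DIC pool and the δ13C of the added, OM-derived CO2 are hypothetical values chosen from the ranges quoted in the text, not fitted data.

```python
def mix_delta(c1: float, d1: float, c2: float, d2: float) -> float:
    """delta-13C of a DIC pool after adding a second carbon source.

    c1, c2 : carbon masses (mg C/L) of the two sources
    d1, d2 : their delta-13C values (permil)
    """
    return (c1 * d1 + c2 * d2) / (c1 + c2)

# Hypothetical: river-derived DIC (18 mg C/L at -9 permil) plus ~100 mg C/L
# of CO2 from OM mineralisation (~-27 permil, the delta-13C-TOC/DOC range).
print(round(mix_delta(18, -9.0, 100, -27.0), 1))  # -> -24.3 permil
```

Respired OM alone would thus drag δ13C-DIC towards roughly −27‰; the much heavier values actually observed in the Holocene aquifer are consistent with additional carbonate dissolution (near 0‰) and with the preferential removal of 12CO2 by methanogenesis, as discussed next.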
This increase in DIC is attributed to both the dissolution of carbonate minerals and the high biogeochemical turnover of the freshly deposited OM in the riverbank sediments (Stahl et al., 2016; Wallis et al., 2020), likely followed by much more slowly progressing fermentation reactions within the Holocene aquifer (Stopelli et al., 2020; Glodowska et al., 2021). Further along groundwater flowpaths 1 and 2, the δ13C-DIC values increase from −9 ± 2‰ to +3 ± 2‰, with DIC becoming enriched in 13C in the Holocene aquifer (Fig. 2 and Fig. 3B). In contrast, progressing towards the RTZ and the Pleistocene aquifer, the decrease in δ13C-DIC values to −17 ± 2‰ indicates a preferential enrichment of 12C-DIC in groundwater, prior to a final increase to background values between −7 ± 2‰ and −13 ± 2‰ in the Pleistocene wells most distant from the riverbank (Fig. 3B). These contrasting δ13C-DIC shifts are likely caused by the succession of methanogenesis, where CO2 is consumed, followed by methanotrophy, where CO2 is produced (Murphy et al., 1989; Campeau et al., 2017).

Methane cycling

[Fig. 4 caption: Conceptual model of organic matter sources and cycling in As-contaminated aquifers. River water enriched with DOC infiltrates into the aquifer from riverbank sediments, promoting reducing processes such as fermentation and Fe(III) reduction. At the aquitard/aquifer hydraulic connections, additional DOC-enriched aquitard pore water infiltrates into the aquifer, thereby promoting methanogenesis. At the transition between the Holocene (grey) and the Pleistocene aquifers (orange), methanotrophy occurs while the intruding reducing conditions dissolve Fe and As from the Pleistocene sands, which consequently turn grey. However, net As immobilisation dominates, as a product of biotic and abiotic reactions involving Fe. Accelerated advection induced by large-scale groundwater extraction from Pleistocene aquifers can exacerbate the advection of reducing conditions and hence promote As mobilisation in previously low-As aquifers.]

Along the groundwater flowpaths, CH4 locally reaches very high concentrations of up to 58 mg/L, especially below the aquitard/aquifer hydraulic connections (Fig. 2). We suggest that the production of CH4 in the Holocene aquifer is attributable to a two-step process involving (i) a substrate-producing fermentation step and (ii) methanogenesis (Conrad et al., 2014; Glodowska et al., 2021). Furthermore, the decrease of methane concentrations across the RTZ and the enrichment in the DIC isotope signature are a strong indication of methanotrophy. The indicated methane cycling is corroborated by a recent microbial community analysis that showed the presence of both methanogenic and methanotrophic microorganisms in the Holocene aquifer and at the RTZ (i.e., Methyloparacocci, Methylomonaceae, Candidatus Methanoperedens) (Glodowska et al., 2020b). The carbon isotope signatures of CH4 (δ13C-CH4) span a remarkably wide range, from −87 ± 2‰ to +47 ± 2‰ (a range of 134‰). In the Holocene aquifer, the low δ13C-CH4 signatures between −87 ± 2‰ and −75 ± 2‰ (Fig. 3C, grey dots) are typical of hydrogenotrophic methanogenesis, in which the isotopically lighter 12CO2 is kinetically favoured over 13CO2 during transformation to CH4 (Whiticar, 1999; Liu et al., 2009; Conrad et al., 2014; Campeau et al., 2017). Accordingly, the preferred consumption of 12CO2 is reflected in the simultaneous increase of δ13C in the bulk DIC (Fig. 3B and Fig. S3).
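One common way of interpreting this pairing of depleted CH4 with enriched DIC (e.g., Whiticar, 1999) is the apparent carbon fractionation factor between DIC and CH4: values above roughly 1.065 point to hydrogenotrophic (CO2-reducing) methanogenesis, while lower values suggest acetoclastic production. The sketch below applies this diagnostic; the threshold and the example delta values are illustrative, drawn from the ranges in the text rather than from the paper's data tables.

```python
def alpha_c(d13c_dic: float, d13c_ch4: float) -> float:
    """Apparent carbon isotope fractionation factor between DIC and CH4.

    Inputs are delta-13C values in permil; after Whiticar (1999),
    alpha_c above ~1.065 is typical of hydrogenotrophic methanogenesis.
    """
    return (1000.0 + d13c_dic) / (1000.0 + d13c_ch4)

# Illustrative values: enriched DIC (+3 permil) coexisting with strongly
# depleted CH4 (-87 to -75 permil), as reported for the Holocene aquifer.
for d_ch4 in (-87.0, -75.0):
    print(f"d13C-CH4 = {d_ch4:+.0f} permil -> alpha_c = {alpha_c(3.0, d_ch4):.3f}")
# -> 1.099 and 1.084, both in the hydrogenotrophic field
```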
Note that the isotopic shift of DIC is much lower than that of CH4 because CO2 has a high background concentration in the form of HCO3−. The δ13C-CH4 signatures in three wells located within the Holocene aquifer along flowpath 2 (Fig. 4) are somewhat less negative (−76 ± 2‰; wells 6a, 8a and 9a in Fig. 3C), while the CH4 concentrations are 40-58 mg/L, which is close to or even above saturation at the given temperature and hydraulic pressure conditions (saturation approx. 43 to 63 mg/L for wells screened at 20 m and 30 m, respectively, considering an average water table level of 8 m b.g.l.). Therefore, and based on noble gas investigations in groundwater from these wells, we hypothesise that an interstitial free CH4 gas phase has formed, which now occupies part of the pore space, accompanied by a slight isotopic enrichment of CH4 in groundwater, as the lighter 12CH4 diffuses more easily into the gas phase than 13CH4 (Xia and Tang, 2012). Further along flowpath 2, into the RTZ and the Pleistocene aquifer (Fig. 3C, yellow and brown dots), dissolved CH4 decreases sharply to below 0.5 mg/L (wells 12 and 13a), while δ13C-CH4 values increase from −87 ± 2‰ up to +47 ± 2‰ (a shift of +134‰). This is a strong indication of methane oxidation by methanotrophy, in which the preferred consumption of the isotopically lighter 12CH4 causes this remarkable isotopic enrichment. In turn, the CO2 produced from methane oxidation is likely causing the observed decrease in δ13C-DIC (Fig. 3B and Fig. S3). Finally, the resulting rise in HCO3− has most likely caused siderite (FeCO3) oversaturation and precipitation, explaining the presence of the latter within the RTZ sediments (Kontny et al., 2021). This hypothesised precipitation of carbonates can also explain the lower δ13C-DIC signatures observed within the RTZ, as a result of the preferential precipitation of 13C-carbonate.
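The magnitude of this enrichment can be put in rough quantitative terms with a closed-system Rayleigh model, δ ≈ δ0 + ε·ln(f), where f is the fraction of CH4 remaining and ε the kinetic enrichment factor. The sketch below inverts this relation; the assumed ε of −20‰ is a typical literature value for microbial methane oxidation, not one determined in this study.

```python
import math

def fraction_remaining(d0: float, d: float, eps: float) -> float:
    """Closed-system Rayleigh model: fraction of substrate remaining.

    d0, d : initial and observed delta-13C of the residual CH4 (permil)
    eps   : kinetic enrichment factor (permil); negative for oxidation
    """
    return math.exp((d - d0) / eps)

# Shift observed along flowpath 2: -87 permil -> +47 permil,
# with an assumed eps of -20 permil for microbial CH4 oxidation.
f = fraction_remaining(d0=-87.0, d=47.0, eps=-20.0)
print(f"fraction of CH4 remaining ~ {f:.4f}")  # ~0.0012, i.e. >99.8% oxidised
```

A residual fraction of this order is broadly consistent with the observed concentration drop from tens of mg/L to below 0.5 mg/L, although dilution and degassing would also contribute.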
Conclusions

As conceptually illustrated in Fig. 4, our hydrochemical and stable carbon isotope analyses suggest that fermentation, methanogenesis and methanotrophy can significantly affect carbon cycling in high-As aquifers. In previous analyses of As-contaminated sites, fermentative processes have largely been overlooked, even though these processes transform OM into small reactive molecules that can be utilised for methanogenesis and supply additional electron-donating capacity for Fe(III) reduction. Therefore, fermentation likely has important implications for the mobilisation and subsequent fate of As. Our study demonstrates that OM-rich pore waters infiltrating from aquitard sediments are accompanied by particularly high As concentrations in the aquifer, consistent with recent findings from other As-contaminated sites (Erban et al., 2013; Mihajlov et al., 2020). While slowly percolating through the aquitard, pore waters become enriched in DOC and NH4+ by the decomposition of sedimentary OM. During this percolation and/or upon egress into the aquifer, the DOC can locally promote methanogenesis and Fe(III) reduction, and thereby result in substantial As mobilisation in the affected parts of the aquifers (Fig. 4). Hence, such highly reducing, methanogenic zones likely contribute to the widely observed heterogeneity in groundwater As concentrations. At these locations, oversaturation of methane can lead to the formation of interstitial gas bubbles, which we hypothesise to locally obstruct the groundwater flow, as reported previously for petroleum-contaminated aquifers (Amos et al., 2011), as well as for sites affected by denitrification and N2 gas formation (Ryan et al., 2000). At our study site, the overall effect of slow transport processes, and accordingly of sufficiently long reaction times for the development of highly reducing conditions, is reflected in particularly high As concentrations >200 μg/L (i.e., >20 times the WHO guideline value) that are predominantly found under methanogenic conditions. Furthermore, our new data provide the first field-based hydrochemical and isotopic evidence that methanotrophy occurs where CH4-enriched groundwater infiltrating from the Holocene aquifer comes into contact with the abundant Fe(III) (oxyhydr)oxides contained in the Pleistocene sediments (Fig. 4, flowpath 2). This field observation is consistent with recent results of laboratory incubation experiments with Pleistocene sediments from the RTZ at our field site, which demonstrated the occurrence of CH4 oxidation coupled with Fe(III) reduction and As mobilisation (Glodowska et al., 2020b). Interestingly, this finding is also in line with investigations carried out at a crude oil-contaminated site in Minnesota (USA), where methanogenesis coupled to iron reduction in the anaerobic core of the plume was shown to be an important process (Amos et al., 2011, 2012). Consequently, the As retardation capacity previously attributed to the high sorption capacity of RTZ sediments, specifically that associated with Fe(II/III) and Fe(III) minerals (Eiche et al., 2008; Rathi et al., 2017; Neidhardt et al., 2018; Kontny et al., 2021), might be reduced under methanotrophic conditions. Here we showed how methane cycling contributes to the patchiness of redox conditions in aquifers and hence to the variability of As concentrations in groundwater, particularly in aquifer zones characterised by slow advection (flowpath 2, Fig. 4). In contrast, in the more permeable aquifer sections where higher flow velocities persist (flowpath 1, Fig. 4), methane cycling plays a minor role and As concentrations remain lower. Nevertheless, the overall mass fluxes of As will be largely controlled by these more permeable zones. Ultimately, both regimes (high As concentration/small flux vs. lower As concentration/high flux) need to be understood and considered in any risk assessment, as well as in the development of groundwater management and remediation strategies.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fractional inequalities of the Hermite-Hadamard type for m-polynomial convex and harmonically convex functions

Eze R. Nwaeze1, Muhammad Adil Khan2, Ali Ahmadian3,∗, Mohammad Nazir Ahmad3 and Ahmad Kamil Mahmood4

1 Department of Mathematics and Computer Science, Alabama State University, Montgomery, AL 36101, USA
2 Department of Mathematics, University of Peshawar, Peshawar, Pakistan
3 Institute of IR 4.0, The National University of Malaysia, 43600 Bangi, Selangor, Malaysia
4 High Performance Computing Centre, CISD, Universiti Teknologi Petronas, Seri Iskandar, Perak, Malaysia

Introduction

The sets T and S ⊆ R \ {0} are called convex and harmonically convex, respectively, if

ςq + (1 − ς)z ∈ T for all q, z ∈ T and ς ∈ [0, 1];
qz/(ςq + (1 − ς)z) ∈ S for all q, z ∈ S and ς ∈ [0, 1].

Whenever used, we shall always consider T a convex set and S a harmonically convex set. Let m ∈ N. Recall that a function ϕ : T → R is said to be m-polynomial convex [31] on T if

ϕ(ςq + (1 − ς)z) ≤ (1/m) Σ_{s=1}^{m} [1 − (1 − ς)^s] ϕ(q) + (1/m) Σ_{s=1}^{m} [1 − ς^s] ϕ(z)

for all q, z ∈ T and ς ∈ [0, 1]. For this class of functions, Toplu et al. established the following double inequality of the Hermite-Hadamard type.

Theorem 1 ([31]). Let ϕ : T → R be an m-polynomial convex function. If ξ, δ ∈ T with ξ < δ, and ϕ is Lebesgue integrable on [ξ, δ], then the following Hermite-Hadamard type inequality holds:

(1/2) · (m/(m + 2^{−m} − 1)) · ϕ((ξ + δ)/2) ≤ (1/(δ − ξ)) ∫_ξ^δ ϕ(x) dx ≤ ((ϕ(ξ) + ϕ(δ))/m) Σ_{s=1}^{m} s/(s + 1).   (1.1)

The inequality (1.1) reduces to the classical Hermite-Hadamard inequality for convex functions if we take m = 1.
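As a quick numerical sanity check of inequality (1.1), the short script below evaluates both bounds for ϕ(x) = x² (non-negative and convex, hence m-polynomial convex) with m = 2 on [0, 1]; the function names and test values here are illustrative and not part of [31].

```python
def lhs(m, phi, xi, delta):
    """Left bound of (1.1): (1/2) * m/(m + 2**-m - 1) * phi(midpoint)."""
    return 0.5 * m / (m + 2.0**-m - 1.0) * phi((xi + delta) / 2.0)

def rhs(m, phi, xi, delta):
    """Right bound of (1.1): (phi(xi)+phi(delta))/m * sum_{s=1}^m s/(s+1)."""
    return (phi(xi) + phi(delta)) / m * sum(s / (s + 1.0) for s in range(1, m + 1))

def integral_mean(phi, xi, delta, n=10_000):
    """Midpoint-rule approximation of the integral mean of phi on [xi, delta]."""
    h = (delta - xi) / n
    return sum(phi(xi + (i + 0.5) * h) for i in range(n)) * h / (delta - xi)

phi = lambda x: x * x
m, xi, delta = 2, 0.0, 1.0
L, M, R = lhs(m, phi, xi, delta), integral_mean(phi, xi, delta), rhs(m, phi, xi, delta)
print(L, M, R)       # 0.2 <= 0.333... <= 0.583...
assert L <= M <= R   # both bounds of (1.1) hold for this example
```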
Recently, Awan et al. [2] introduced the notion of m-polynomial harmonically convex functions: a real-valued function ϕ : S → R+ is m-polynomial harmonically convex if, for all q, z ∈ S and ς ∈ [0, 1], ϕ(qz/(ςq + (1 − ς)z)) is bounded above by the same polynomial weights applied to ϕ(q) and ϕ(z) as in the convex case. In the same paper, the authors established the following Hermite-Hadamard type inequality for this class of functions.

Theorem 2 ([2]). Let ϕ : S → R+ be an m-polynomial harmonically convex function. If ξ, δ ∈ S with 0 < ξ < δ, and ϕ is Lebesgue integrable on [ξ, δ], then the following Hermite-Hadamard type inequality holds: [...]

In the sequel, we will denote the sets of all m-polynomial convex and m-polynomial harmonically convex functions from A into B by XPm(A, B) and HXPm(A, B), respectively. The classical Hermite-Hadamard inequality has generated a wealth of generalizations and extensions to other classes of convexity; there are dozens of articles in this direction. We invite the interested reader to see the articles [3-6, 8, 10-20, 22-30, 32-34] and the references cited therein. Now, recall that the left- and right-sided ζ-Riemann-Liouville fractional integral operators ζJ^ϱ_{ξ+} and ζJ^ϱ_{δ−} of order ϱ > 0, for a real-valued continuous function ϕ(r), are defined as [21]

(ζJ^ϱ_{ξ+} ϕ)(r) = (1/(ζ Γ_ζ(ϱ))) ∫_ξ^r (r − ς)^{ϱ/ζ − 1} ϕ(ς) dς, r > ξ,
(ζJ^ϱ_{δ−} ϕ)(r) = (1/(ζ Γ_ζ(ϱ))) ∫_r^δ (ς − r)^{ϱ/ζ − 1} ϕ(ς) dς, r < δ,

where ζ > 0 and Γ_ζ is the ζ-gamma function, given by Γ_ζ(r) = ∫_0^∞ ς^{r−1} e^{−ς^ζ/ζ} dς, with the properties Γ_ζ(r + ζ) = r Γ_ζ(r) and Γ_ζ(ζ) = 1. If ζ = 1, we simply write J^ϱ_{ξ+} and J^ϱ_{δ−}. The beta function B is defined by B(x, y) = ∫_0^1 ς^{x−1} (1 − ς)^{y−1} dς for x, y > 0. Another fractional integral operator of interest is the Caputo-Fabrizio operator [1]: let L2(ξ, δ) be the space of square-integrable functions on the interval (ξ, δ), and let H1(ξ, δ) := {g ∈ L2(ξ, δ) : g′ ∈ L2(ξ, δ)}. Since the classes of convexity considered here are new, not much work has been done in this direction, and this work is geared towards further development of inequalities for these classes. In view of this, we aim to achieve the following objectives: 1. To establish new Hermite-Hadamard type inequalities for the class of m-polynomial convex functions involving the Caputo-Fabrizio integral operators; our first result in this direction generalizes and extends Theorem 3. 2. To obtain inequalities of the Hermite-Hadamard type for m-polynomial harmonically convex functions via the ζ-Riemann-Liouville fractional integral operators; these, in turn, also complement and generalize some existing results in the literature.

Inequalities for m-polynomial convex functions

Inequalities of the Hermite-Hadamard type for m-polynomial convex functions are presented here; the results involve the Caputo-Fabrizio operators. [...] The required result follows.

Conclusion

Utilizing the Caputo-Fabrizio and generalized Riemann-Liouville fractional integral operators, we proved some inequalities of the Hermite-Hadamard kind for m-polynomial convex and harmonically convex functions. Our results generalize, extend and complement the results in [7, 9, 31].
S100A9 gene silencing inhibits the release of pro-inflammatory cytokines by blocking the IL-17 signalling pathway in mice with acute pancreatitis

Abstract

The study aimed to investigate whether S100A9 gene silencing, mediating the IL-17 pathway, affected the release of pro-inflammatory cytokines in acute pancreatitis (AP). Kunming mice were assigned to the normal, AP, AP + negative control (NC), AP + shRNA, AP + IgG and AP + anti-IL-17 groups. ELISA was applied to measure the levels of AMY, LDH, CRP, TNF-α, IL-6 and IL-8. The cells were distributed into the control, blank, NC, shRNA1 and shRNA2 groups. MTT assay, flow cytometry, RT-qPCR and Western blotting were used to evaluate cell proliferation, cell cycle and apoptosis, and the expressions of S100A9, TLR4, RAGE, IL-17, HMGB1 and S100A12 in tissues and cells. Compared with the normal group, the AP group displayed increased expressions of AMY, LDH, CRP, TNF-α, IL-6, IL-8, S100A9, TLR4, RAGE, IL-17, HMGB1 and S100A12. The AP + shRNA and AP + anti-IL-17 groups exhibited an opposite trend. The in vitro results were as follows: compared with the control group, the blank, NC, shRNA1 and shRNA2 groups demonstrated increased expressions of S100A9, TLR4, RAGE, IL-17, HMGB1 and S100A12, as well as increased cell apoptosis and cells at the G1 phase, with reduced proliferation. Compared with the blank and NC groups, the shRNA1 and shRNA2 groups had declined expressions of S100A9, TLR4, RAGE, IL-17, HMGB1 and S100A12, as well as declined cell apoptosis and cells at the G1 phase, with elevated proliferation. The results indicated that S100A9 gene silencing suppressed the release of pro-inflammatory cytokines through blocking of the IL-17 pathway in AP.

[...] hypercholesterolaemia, iatrogenic procedures and other idiopathic causes. 4 It is estimated that approximately 30% of all AP patients will be subject to severe attacks, which is indicative of a high mortality rate. 5 Owing to both the high mortality rate and the exorbitant medical costs associated with the treatment of the more severe cases of AP, the treatment of AP remains a critical challenge in the field of gastroenterology. 6 Schnekenburger et al demonstrated the roles of S100 calcium-binding protein A9 (S100A9) in inflammatory cell infiltration and in cell-cell contact regulation. 7 S100A9, which is commonly referred to as myeloid-related protein-14 (MRP14), is a primary member of the S100 family of proteins and has been linked to acute and chronic inflammatory conditions. 8 Furthermore, elevated levels of S100A8/A9 have been detected in a variety of inflammatory diseases, such as rheumatoid arthritis and inflammatory bowel disease. 9 In this study, we aimed to elucidate the mechanisms involving S100A9 and its role in AP. When combined with S100A8, S100A9 constitutes the heterodimeric protein calprotectin (S100A8/9), which is expressed in nearly all cells, tissues and fluids in the human body. 10 A recent study that explored the relationship between pancreatic cancer, S100A9/A8 and transforming growth factor beta 1 (TGF-β1) concluded that S100A9/A8 is overexpressed by infiltrating inflammatory cells and that its expression is related to TGF-β1 in pancreatic ductal adenocarcinoma (PDAC). 11 Interleukin-17 (IL-17), a pro-inflammatory cytokine mainly produced by T-helper 17 (Th17) cells, has been reported to play a crucial role in the development of an effective immune response.
12, 13 Liu et al reported that IL-17 played a pivotal role in the pathogenesis of numerous inflammatory diseases of the central nervous system (CNS), such as multiple sclerosis and stroke, 14 whereas Dai et al suggested that serum IL-17 was an early prognostic biomarker of severe acute pancreatitis in patients receiving continuous blood purification. 15 However, few studies have placed an emphasis on the effects of S100A9 on the release of pro-inflammatory cytokines through the IL-17 signalling pathway in AP. Hence, in this study, we aimed to explore the roles of S100A9 in the release of pro-inflammatory cytokines via the IL-17 signalling pathway in a mouse model of AP.

| Ethics statement

All animal use and experimental procedures were performed in accordance with the Declaration of Helsinki. 16

| Establishment of AP mouse model

A total of 90 healthy male Kunming (KM) mice were raised in a specific-pathogen-free (SPF) environment (23°C room temperature, 65% relative humidity and 12/12 hours light/dark cycle), with free access to water and with food deprivation for a minimum of 12 hours. The mice were then divided into 6 groups (15 mice per group): a normal group (intraperitoneally injected with the same volume of sterile normal saline 6 times, once/h), an AP group (intraperitoneally injected with 20% L-arginine [L-Arg] [200 mg/100 g] [S3174, Selleck Chemicals Co. Ltd., Shanghai, China] 6 times, once/h), an AP + negative control (NC) group (injected with 200 μL of 5 × 10⁹ TU/mL shRNA-NC lentivirus via the tail vein before intraperitoneal injection with 20% L-Arg), an AP + shRNA group (injected with 200 μL of 5 × 10⁹ TU/mL shRNA-S100A9 lentivirus via the tail vein before intraperitoneal injection with 20% L-Arg), an AP + IgG group and an AP + anti-IL-17 group. 17 When the mice exhibited a reduction in foraging activity, a tendency to huddle, loose fur, distended abdomens and frequent urination post-model establishment, the model was considered to be successful. 18 Twenty-four hours post-model establishment, a tail bleeding procedure was performed. The mice were euthanised, and the serum was separated and stored in a refrigerator at −20°C. One part of the extracted pancreatic tissues was fixed, embedded and sectioned for immunohistochemistry (IHC) and haematoxylin-eosin (HE) staining, whereas the other part was used for reverse transcription quantitative polymerase chain reaction (RT-qPCR) and Western blotting.

| Immunohistochemistry (IHC)

Immunohistochemistry was performed in accordance with the instructions of the SP-9001 Kit (Beijing Nobleryder Technology Co. Ltd., Beijing, China). The paraffin-embedded pancreatic tissue blocks obtained from the mice of the normal and AP groups were placed at room temperature for 30 minutes. The tissues were then fixed with acetone at 4°C for 10 minutes, dewaxed and rehydrated. After the tissues were washed 3 times with phosphate-buffered saline (PBS) (5 minutes per wash), 3% H2O2 was used to quench the endogenous peroxidase activity for 5-10 minutes. The blocks were then washed 3 times with distilled water (3 minutes per wash) and immersed twice in PBS (5 minutes each time). After that, the tissues were blocked in a working solution of 5% normal goat serum (C1771, Beijing Applygen Technology Co., Ltd, Beijing, China). After incubation at 37°C for 10-15 minutes, the tissue blocks were sliced into sections of approximately 5 μm, which were flattened and baked at 70°C for 1 hour, followed by slicing and an additional round of baking at 60°C for 5.
Next, the sections were incubated with rabbit anti-S100A9 antibody (ab92507, Abcam Inc., Cambridge, MA, USA) at 37°C for 1 hour. After an additional 3 washes with PBS (5 minutes each time), the sections were incubated with horseradish peroxidase (HRP)-labelled streptavidin working solution (0343-10000U, Beijing Imun Biotechnology Co., Ltd., Beijing, China) at 37°C for 1 hour, followed by 3 further PBS washes (5 minutes each time). 3,3′-Diaminobenzidine (DAB, ST033, Guangzhou Whiga Technology Co., Ltd., Guangzhou, Guangdong, China) was used for colour development for a period of 3-10 minutes, and the samples were washed with double-distilled water (DDW) for 10 minutes after the reaction had been stopped. The sections were then counterstained with haematoxylin (Shanghai Fusheng Industrial Co., Ltd., Shanghai, China) for 1 minute and soaked in a 1% hydrochloric acid-ethanol mixture for 10 seconds. After washing with running water, the tissues were blued with 1% ammonia for 10 seconds. Next, the samples were [...]

| RNA extraction and RT-qPCR

After centrifugation at 4°C (8000 rpm, 5 minutes) to discard the supernatant, the samples were air-dried at room temperature and, in certain cases, vacuum-dried for 5-10 minutes. DEPC-treated water (20 μL) was used to dissolve the precipitate, followed by determination of the RNA concentration. The primer sequences were synthesized by Takara (Takara Biotechnology Co., Ltd., Dalian, China; Table 1), and then reverse transcription was performed using the Reverse Transcription Kit (Beijing Transgen Biotechnology Co., Ltd., Beijing, China) in accordance with the manufacturer's instructions. The reaction conditions were as follows: 42°C for 30-50 minutes (reverse transcription) and 85°C for 5 seconds (enzyme deactivation). The reverse-transcribed cDNA was diluted to 50 ng/μL (adding 2 μL each time), and the amplification system was 25 μL. A fluorescence quantitative PCR instrument (ViiA 7, Da An Gene Co., Ltd. of Sun Yat-Sen University, Guangdong, China) was used. The PCR conditions included pre-denaturation at 95°C for 10 minutes, followed by 40 cycles of denaturation at 95°C for 5 seconds and annealing/elongation at 60°C for 30 seconds. 2 μg of RNA was used as template, with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as the internal control. The 2^−ΔΔCt method 19 was used to calculate the relative mRNA expressions of the target genes (S100A9, IL-17, high-mobility group box 1 protein (HMGB1) and S100A12): ΔΔCt = ΔCt(AP group) − ΔCt(normal group), with ΔCt = Ct(target gene) − Ct(internal control). Total RNA was extracted from the pancreases of the mice and, post-transfection and 48 hours of incubation, the same procedure was conducted for all cell experiments.
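As a small illustration of the 2^−ΔΔCt calculation described above, the sketch below computes the relative expression of one target gene; the Ct values are made-up examples, not data from this study.

```python
def relative_expression(ct_target_treat, ct_ref_treat,
                        ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt method: fold change of a target gene in a treated
    sample vs a control sample, each normalised to an internal reference
    gene (here GAPDH)."""
    d_ct_treat = ct_target_treat - ct_ref_treat
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2.0 ** -dd_ct

# Hypothetical Ct values: S100A9 in an AP sample vs a normal sample.
print(relative_expression(22.0, 18.0, 25.0, 18.0))  # -> 8.0-fold upregulation
```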
| Western blotting

After weighing, the cooled pancreatic tissues from each group were placed in a glass grinder containing 1 mL of ice-cold normal saline. After homogenisation in an ice bath, the tissues were centrifuged (12 000 rpm) at 4°C for 20 minutes, and the supernatant was discarded. Next, 1 mL of lysis buffer (including 50 mmol/L Tris, 150 mmol/L NaCl, 5 mmol/L ethylenediaminetetraacetic acid (EDTA), 0.1% sodium dodecyl sulphate (SDS), 1% NP-40, 5 μg/mL aprotinin and 2 mmol/L phenylmethanesulfonyl fluoride (PMSF)) was added to the tissues, which were triturated repeatedly to allow the lysates to spread evenly. After the tissues were homogenised in an ice bath, the protein lysate was added for lysing at 4°C for 30 minutes, with periodic shaking at intervals of 10 minutes. The supernatants were obtained for further use. [...] Plasmids expressing shRNA1 and shRNA2 were produced and transformed into E. coli DH5α. A total of 16 single colonies were selected [...]. The cell groups included a shRNA2 group (HPNE cells induced by 1 × 10⁻⁸ mol/L cerulein with 1 × 10¹⁰ IU/mL shRNA2-S100A9 lentivirus solution).

| Statistical analysis

Measurement data were expressed as mean ± standard deviation (SD). Comparisons between two groups were analysed by means of the t test, whereas comparisons among multiple groups were performed using one-way analysis of variance (ANOVA). P < .05 was considered statistically significant.
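The group comparisons described above (t test for two groups, one-way ANOVA for several) can be reproduced with standard tools; a minimal sketch with made-up measurements, not the study's data, follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical relative-expression measurements for three cell groups
blank = rng.normal(1.00, 0.08, 6)
shrna1 = rng.normal(0.45, 0.08, 6)
shrna2 = rng.normal(0.50, 0.08, 6)

# Two-group comparison: independent-samples t test
t, p_t = stats.ttest_ind(blank, shrna1)

# Multi-group comparison: one-way ANOVA
f, p_f = stats.f_oneway(blank, shrna1, shrna2)

print(f"t = {t:.2f}, p = {p_t:.4f}; F = {f:.2f}, p = {p_f:.4f}")
# p < .05 would be reported as statistically significant
```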
| RESULTS

3.1 | Strong positive expressions of S100A9 and IL-17 are found in pancreatic tissues

The expression of S100A9 determined by IHC was weakly positive in the normal, AP + shRNA and AP + anti-IL-17 groups, with light staining, but strongly positive in the AP, AP + NC and AP + IgG groups, with a distinct increase in brown granules, which were mainly expressed in the pancreatic ductal complex and interstitial inflammatory cells (Figure 1A). The positive expression of IL-17 showed the same tendency as that of S100A9 (Figure 1B). [...] (Table 2).

[Figure 1 caption: Positive expressions of S100A9 and IL-17 in pancreatic tissues in each group determined by IHC. A, expression of S100A9 observed under the microscope (×400) and statistical analysis; B, expression of IL-17 observed under the microscope (×400) and statistical analysis; *P < .05 compared with the normal group; #P < .05 compared with the AP group; S100A9, S100 calcium-binding protein A9; IL-17, interleukin 17; IHC, immunohistochemistry; AP, acute pancreatitis.]

| S100A9 gene silencing blocks the activation of the IL-17 signalling pathway in vivo

Both RT-qPCR and Western blotting indicated that the mRNA and protein expressions of S100A9, TLR4, RAGE, IL-17, HMGB1 and S100A12 in the AP group were all higher than those in the normal group (all P < .05). The AP + shRNA group had significantly lower mRNA and protein expressions of S100A9, TLR4, RAGE and HMGB1, as well as a nonsignificant reduction in the expressions of IL-17 and S100A12, compared with the normal group. Furthermore, there were significantly lower mRNA and protein expressions of S100A9, TLR4, RAGE, IL-17 and HMGB1, with nonsignificant reductions in the expression of S100A12, in the AP + anti-IL-17 group. There was no significant difference detected among the AP, AP + NC and AP + IgG groups (P > .05; Figure 3A, B).

3.5 | S100A9 gene silencing blocks the activation of the IL-17 signalling pathway in vitro

In comparison with the control group, the blank, NC, shRNA1 and shRNA2 groups had increased mRNA and protein expressions of S100A9, TLR4, RAGE, IL-17, HMGB1 and S100A12 (all P < .05). Compared with the blank and NC groups, the shRNA1 and shRNA2 groups displayed notably decreased mRNA and protein expressions of S100A9, TLR4, RAGE and HMGB1, with nonsignificant declines in the expressions of IL-17 and S100A12. No significant difference was observed between the shRNA1 and shRNA2 groups (Figure 4; P > .05).

| S100A9 gene silencing increases cell proliferation

Compared with the control group, cell proliferation in the blank, NC, shRNA1 and shRNA2 groups was reduced when measured after 48 and 72 hours (all P < .05). However, no significant difference was found between the blank and NC groups at either time point (all P > .05). Compared with the blank and NC groups, the shRNA1 and shRNA2 groups showed increased proliferation capacities at both 48 and 72 hours, and the proliferation capacity of the shRNA1 group was slightly higher than that of the shRNA2 group (P > .05; Figure 5).

3.7 | S100A9 gene silencing promotes cell cycle entry while decreasing cell apoptosis

PI staining results illustrated in Figure 6A and B indicated that, compared with the control group, the percentage of cells at the G1 phase increased, whereas reductions at the G2 and S phases were recorded, in the blank, NC, shRNA1 and shRNA2 groups (all P < .05). There were no statistically significant differences between the blank and NC groups (all P > .05). Compared with the blank and NC groups, the percentage of cells at the G1 phase decreased, whereas increases at the G2 and S phases were observed, in the shRNA1 and shRNA2 groups (all P < .05). In the shRNA1 group, decreases of cells at the G1 phase but increases at the G2 and S phases were observed in comparison with the shRNA2 group. Annexin-V-FITC/PI double-staining results shown in Figure 6C and D revealed that the apoptotic rate in the control [...]

[Figure 2 caption: Pathological morphology of pancreatic tissues in each group measured by HE staining (×200). HE, haematoxylin-eosin.]

[Figure 3 caption: Relative mRNA and protein expressions of S100A9, TLR4, RAGE, IL-17, HMGB1 and S100A12 in pancreatic tissues in each group examined by RT-qPCR and Western blotting. (A) mRNA expressions examined by RT-qPCR; (B) protein expressions examined by Western blotting; *P < .05 compared with the normal group; #P < .05 compared with the AP group; S100A9, S100 calcium-binding protein A9; TLR4, toll-like receptor 4; RAGE, receptor for advanced glycation end products; IL-17, interleukin-17; HMGB1, high-mobility group box 1 protein; S100A12, calgranulin C; RT-qPCR, reverse transcription quantitative polymerase chain reaction; AP, acute pancreatitis.]

[Figure 4 caption: Relative mRNA and protein expressions of S100A9, TLR4, RAGE, IL-17, HMGB1 and S100A12 in HPNE cells in each group determined by RT-qPCR and Western blotting. (A) protein expressions determined by Western blotting; (B) mRNA expressions determined by RT-qPCR; *P < .05 compared with the control group; #P < .05 compared with the blank and NC groups; GAPDH, glyceraldehyde-3-phosphate dehydrogenase.]

| DISCUSSION

The prognosis of AP is generally unfavourable, and the rate of recurrence is as high as 17%. Approximately 8% of AP patients will fall victim to chronic pancreatitis within a 5-year period. 20 Therefore, it is of significant urgency that more effective treatments are used to alleviate the issue of recurrence.
Our findings provided evidence that S100A9 silencing inhibited the release of inflammatory cytokines, promoted the proliferation and suppressed the apoptosis of pancreatic cells in a mouse model of AP via blockade of the IL-17 signalling pathway, thus highlighting the potential of S100A9 as a therapeutic target in the treatment of AP. Elevated expression of S100A9 has previously been detected in the progression of a number of inflammatory diseases, including psoriatic arthritis, 21 systemic lupus erythematosus 22 and inflammatory bowel disease. 23 Likewise, we identified increased expression of S100A9 in our AP mouse models. As a member of the S100 family, with the exception of those affecting epithelial tissues, S100A9 maintains its regulatory influence on cellular processes including transcription, proliferation and differentiation. 24 In addition, combined with its heterodimer partner S100A8, S100A9 exerted growth-inhibitory and apoptosis-inducing effects in a variety of cells via the classical mitochondrial pathway. 25,26 Moreover, Li et al asserted that the overexpression of S100A9 could induce cell apoptosis and inhibit cell growth. 27 Therefore, in our study, it was inferred that S100A9 gene silencing could act to promote cell growth and inhibit cell apoptosis in AP. S100A8/S100A9 was shown to control the G2/M cell cycle checkpoint, as well as the apparent dysregulation that occurred, leading to the loss of the checkpoint in head and neck squamous cell carcinoma. 28 During this process, p53, which is correlated with cell cycle, apoptosis and adipogenesis, can modulate S100A9 transcription. 29 Initially, S100A8/A9 enhanced the activity of PP2A phosphatase as well as p-Chk1 (Ser345) phosphorylation, leading to the inactivation of the G2/M Cdc2/cyclin B1 complex through the inhibitory phosphorylation of mitotic p-Cdc25C (Ser216) and p-Cdc2 (Thr14/Tyr15), followed by a decrease in the expression of cyclin B1 and cell cycle arrest at the G2/M checkpoint, which ultimately resulted in reduced cell division and the negative regulation of squamous cell carcinoma growth. 30 In a zinc-reversible manner, S100A8/A9 induced apoptosis in various human and mouse tumour cell lines, including colon cancer cell lines. 31 In a previous study, Schnekenburger et al found that pancreatitis induced an increased level of S100A9 in the pancreas, and that the application of S100A8/A9 in mice induces pancreatic cell-cell contact dissociation, which could trigger cell apoptosis. 32 Once activation of the IL-17 signalling pathway is mediated by S100A9, HMGB1 and RAGE, both of which are cell death biomarkers, are upregulated, thus leading to cell apoptosis. 33 IL-17 is characterised by its ability to induce the expression of both cytokines and chemokines and has been reported to participate in the amplification of inflammatory responses. 34 The significant effects of IL-17 blockade have proved to be controversial due to its weak functions in vitro: on the one hand, IL-6 secretion, nuclear factor-κB (NF-κB) and other pro-inflammatory mediators were only activated under high levels of cytokines, whereas on the other hand, IL-17 exhibited significantly potent synergy in its ability to link with other cytokines such as IL-1β and TNF-α. 35 In the present study, we found that S100A9 exerted its effects by blocking the IL-17 signalling pathway.
This was supported by a study of the synovial fluid (SF) of rheumatoid arthritis (RA) patients, which indicated that the S100A9 level was closely associated with IL-17 and IL-6, the critical factors that induce T-helper (Th) 17 differentiation. 36

[Figure 5 caption: Cell proliferation in each group evaluated by MTT assay. *P < .05 compared with the control group; #P < .05 compared with the blank and NC groups; NC, negative control; OD, optical density; MTT, 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2-H-tetrazolium bromide.]

S100A8 and S100A9 are generally considered to be pro-inflammatory substances. 39 This was observed in the present study, in that S100A9 silencing inhibited the release of pro-inflammatory cytokines. Both the pro- and anti-inflammatory functions of macrophages were primarily premised on the following factors: one was the stage of differentiation, and the other was the distinct mechanisms of activation. 40

[Figure 6 caption: Cell cycle distribution and cell apoptosis measured by flow cytometry. A and B, cell cycle distribution in each group; C and D, cell apoptosis rate in each group; *P < .05 compared with the control group; #P < .05 compared with the blank and NC groups; NC, negative control.]

Considering that S100A8 and S100A9 are less stable than S100A8/A9 heterodimers, S100A8/A9 heterodimers are usually referred to when discussing pro-inflammatory activities. 41 The main receptors for S100A8, S100A9 and calprotectin are TLR4, which represents the dominant receptor for the S100A8/S100A9 signalling pathway, and RAGE; however, the specific receptors and pathways for S100A8, S100A9 and calprotectin depend mainly on the cell type. 42 For example, activated microglia produce significantly greater levels of S100A9 in Alzheimer's disease. 43 A previous study indicated that both TLR4 and RAGE proteins were overexpressed in pancreatitis, and highlighted the ability of S100A9 to activate the IL-17 signalling pathway and regulate the expression of inflammatory factors by binding to the cell surface receptors TLR4 and RAGE. 44 Once secreted, S100A8/S100A9 has the potential to bind to TLR4, displaying pro-inflammatory functions and resulting in the up-regulation of pro-inflammatory cytokines and the activation of endothelial cells and macrophages. 36 In conclusion, the results of the present study demonstrated that S100A9 silencing inhibits the release of pro-inflammatory cytokines by blocking the IL-17 signalling pathway, as evidenced in the AP mouse model established in this study. Cell proliferation was inhibited, and apoptosis was enhanced. The results of this study provide an experimental basis for the use of S100A9-based therapy in the treatment of AP. It should be noted that the mechanisms linking S100A9 and the IL-17 signalling pathway require further analysis, and further clinical trials are needed in order to assess whether the key findings of this study can be applied to humans.
HTLV-1 p12 modulates the levels of prion protein (PrPC) in CD4+ T cells
Introduction Infection with human T cell lymphotropic virus type 1 (HTLV-1) is endemic in Brazil and is linked with pro-inflammatory conditions including HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP), a chronic, incapacitating neuroinflammatory disease that culminates in the loss of motor functions. The mechanisms underlying the onset and progression of HAM/TSP are incompletely understood. Previous studies have demonstrated that inflammation and infectious agents can affect the expression of cellular prion protein (PrPC) in immune cells. Methods Here, we investigated whether HTLV-1 infection affects PrPC content in cell lines and primary CD4+ cells in vitro using flow cytometry and western blot assays. Results We found that HTLV-1 infection decreased the expression levels of PrPC via the HTLV-1 Orf I-encoded p12, an endoplasmic reticulum resident protein also known to post-transcriptionally affect cellular proteins such as MHC class I and the IL-2 receptor. In addition, we observed a reduced percentage of CD4+ T cells from infected individuals expressing PrPC, which was reflected in type II IFN but not IL-17 expression. Discussion These results suggest that PrPC downregulation, linked to both HTLV-1 p12 and IFN-γ expression in CD4+ cells, may play a role in the neuropathogenesis of HTLV-1 infection. The HTLV-1 viral genome encodes structural genes (Gag, Pol, and Env) and five open reading frames (orfs) collectively referred to as the pX region. The Tax and Rex viral proteins are encoded by Orfs IV and III, respectively. The Tax protein has been identified as a potent activator of a variety of transcription pathways and has been related to T cell transformation (Currer et al., 2012; Mohanty et al., 2020). The Rex protein acts as a post-transcriptional regulator of viral expression (Hidaka et al., 1988; Inoue et al., 1991). Orf I and Orf II encode genes whose functions are primarily associated with the modulation of host immune responses (Edwards et al., 2011; Moles et al., 2019; Sarkis et al., 2019; Omsland et al., 2020). Orf I encodes the endoplasmic reticulum (ER) resident p12 precursor protein, which can be further processed into p8 (Koralnik et al., 1992, 1993; Fukumoto et al., 2007), whereas Orf II encodes the p13 and p30 proteins (Ciminale et al., 1992; Koralnik et al., 1992; Ciminale et al., 1999; Bartoe et al., 2000; Nicot et al., 2004). The p12 protein is localized in the ER and Golgi complex (Koralnik et al., 1992), and it is required to efficiently infect primary human T lymphocytes (Albrecht et al., 2000) and dendritic cells in vitro and macaques in vivo (Valeri et al., 2010). Moreover, p12 promotes MHC class I proteasomal degradation, reducing HTLV-1 antigen presentation and the recognition of HTLV-1-infected cells by cytotoxic CD8+ T cells (Johnson et al., 2001; Pise-Masison et al., 2014). HTLV-1 infection induces the activation of T lymphocytes, leading to spontaneous proliferation, expression of molecules associated with cellular activation, and production of pro-inflammatory cytokines (Prince et al., 1994; Ishikawa et al., 2013; Novaes et al., 2013; Coutinho et al., 2014). This phenomenon can be related to the transactivation of genes by Tax and to the ability of p12 to induce activation of the NFAT pathway (Albrecht et al., 2002) and the JAK/STAT5 pathway, thereby reducing the cell's requirement for IL-2 (Nicot et al., 2001).
While p12 increases T cell activation (Albrecht et al., 2002; Ding et al., 2002, 2003; Kim et al., 2003, 2006), its cleavage product p8(I) is recruited to the immunologic synapse following TCR engagement, suggesting downregulation of TCR signaling (Fukumoto et al., 2009; Prooyen et al., 2010). The p8 protein gains access to neighboring cells via cellular conduits and favors cell-to-cell transmission by inducing T cell clustering through LFA-1 expression and conduit formation (Prooyen et al., 2010; Donhauser et al., 2020). The expression of p8 is also required in virus-infected cells for escape from immune recognition by cytotoxic CD8+ T cells (Pise-Masison et al., 2014; Gutowska et al., 2022), raising the hypothesis that p8, by entering CD8 cells and downregulating the TCR, may affect the strength of the immunological synapse and CD8+ T cell killing. The prion protein (PrPC) is a glycoprotein bound to the external surface of cells by a glycosylphosphatidylinositol (GPI) anchor. PrPC is well known in neuropathology since its post-translational conversion can lead to a transmissible spongiform encephalopathy-associated protein isoform (PrPSc) that is characterized by numerous chemical modifications, including sialic acid residues attached to the glycosyl inositol phospholipid anchor (Stahl et al., 1992). PrPC is highly conserved and found constitutively in cells such as those of the nervous and immune systems, where it is involved in a plethora of biological events, such as oxidative stress, cell survival and death, cell differentiation, cell adhesion, T cell activation, myelin maintenance, and synaptic transmission (Spielhaupter and Schätzl, 2001; Ford et al., 2002; Isaacs et al., 2006; Westergard et al., 2007; Linden, 2017). PrPC is expressed by many cells of the immune system such as T and B lymphocytes, dendritic cells, NK cells, macrophages, and monocytes (Dürig et al., 2000; Li et al., 2001; Thielen et al., 2001). In T lymphocytes, its expression increases with cell activation. Mabbott et al. (1997) demonstrated that lymphocytes from PrPC−/− mice have reduced proliferation compared with T lymphocytes of wild-type (PrPC+/+) mice when stimulated by Concanavalin A. In humans, CD8+ T cells have significantly higher PrPC expression than CD4+ T cells, and IFN-γ significantly upregulates PrPC expression on monocytes in a concentration- and time-dependent manner (Dürig et al., 2000). Using goats naturally born without PrPC, Malachin et al. (2017) demonstrated that PrPC deficiency impacts interferon signaling by upregulating type I IFN genes. Moreover, after lipopolysaccharide injection, these animals exhibited longer persistence of clinical symptoms. Thus, the expression of PrPC might play an important role in viral and bacterial infections. The pattern of PrPC expression was evaluated in the cerebrospinal fluid (CSF) obtained from HAM/TSP patients, but no significant difference was observed (Torres et al., 2012). However, Souza et al. (2021) recently demonstrated higher levels of soluble PrPC in CSF, in addition to higher levels of CCL2 and neopterin. The study of the different elements of the immune response and the alterations induced by viral proteins is essential for the comprehension of neurological pathology. Thus, we analyzed whether HTLV-1 infection affects the content and expression level of PrPC in CD4+ T cells in vitro and in vivo in HTLV-1-infected individuals.
We found that HTLV-1 infection decreases PrPC levels in vitro via the expression of the HTLV-1 p12 protein encoded by the viral Orf I gene. In addition, the frequency of CD4+ cells expressing PrPC was decreased in peripheral blood CD4+ cells from people living with HTLV-1 compared with non-infected individuals. Reduced PrPC levels did not correlate with CD25+ expression or proviral load in CD4+ T cells, suggesting that other mechanisms may be involved. We also found higher IFN-γ expression in cells with reduced PrPC expression. Our studies suggest that the expression of PrPC is regulated in T lymphocytes upon HTLV-1 infection and may impact susceptibility to the development of HAM/TSP. MT-2 cells were irradiated at 2000 rad using the Gammacell® 220 Excel equipment. In parallel, 1×10⁶ target cells (Jurkat) were stained with 1 μM CFSE (Life Technologies/Thermo Fisher Scientific) for 15 min at 37°C. After that, cells were washed twice with PBS and centrifuged at 200 g for 7 min. Irradiated MT-2 cells were co-cultured with Jurkat cells (5×10⁵; 1:1). After 48 h or 96 h at 37°C in a humidified atmosphere with 5% CO₂, CFSE-positive cells were sorted using a MoFlo cell sorter (Dako Cytomation/Beckman Coulter). Confirmation of infection was performed through PCR of the viral Tax gene. In addition, cells were labeled as described below and analyzed by flow cytometry. PrPC expression was silenced in MT-2 cells using human PrPC siRNA (PrP siRNA(h): sc-36318; Santa Cruz Biotechnology). Cells were cultured in Opti-MEM medium (Thermo Fisher Scientific) supplemented with 10% FBS for 24 h at 37°C in a humid atmosphere with 5% CO₂. The siRNA transfection was performed at 60-80% culture confluence, using Lipofectamine 2000 (Invitrogen/Thermo Fisher Scientific), according to the manufacturer's instructions. PrPC mRNA silencing was verified by qRT-PCR (Erdogan et al., 2013).
p19 quantification by ELISA
The production of HTLV-1 in the supernatants of the HTLV-1-infected cell cultures was assessed by measuring the amount of p19 Gag protein by enzyme-linked immunosorbent assay, according to the manufacturer's instructions (Zeptometrix, Buffalo, NY, United States). The supernatant of cell cultures was centrifuged, and the cell pellet was discarded. Supernatants were treated with lysis buffer, mixed well, and added to pre-coated plates. The plate was incubated for 2 h at 37°C and washed six times with washing buffer. HTLV-1 detector antibody was added and incubated for 1 h at 37°C. The plates were washed as previously described. Peroxidase working solution was added and incubated for 1 h at 37°C; the plates were washed again as previously described. The substrate was added for 30 min at room temperature, and a stop solution was added. Plates were read at 450 nm in a microplate reader.
2.7. Immunofluorescence microscopy
2×10⁵ cells were centrifuged at 800 rpm for 3 min onto glass slides (Thermo Fisher Scientific, Waltham, MA, United States). Cytospins were fixed with 2% paraformaldehyde for 1 h at room temperature. Next, unspecific binding was blocked by incubation with blocking buffer (PBS + 5% FBS) for 2 h. Glass slides were washed three times and incubated overnight with SAF32 mAb diluted 1:50 (v/v). Following this, cytospins were washed three times and incubated with secondary antibody anti-mouse IgG conjugated to Alexa Fluor 647 (Invitrogen/Thermo Fisher Scientific) diluted 1:1000 (v/v) for 1 h at room temperature.
Cytospins were washed three times and incubated with DAPI (2 mg/ml; Sigma-Aldrich). After 5 min at room temperature, glass slides were washed, and fluorescence microscopy was performed using a confocal microscope (Leica TCS SP5 with AOBS) with a 63× objective.
Western blot
Cells were lysed with RIPA buffer (reagents from Vetec, with Nonidet P-40 [1%] from Abcam) for 15 min and centrifuged at 10,000 g for 20 min at 4°C. The supernatant was harvested, and the total protein concentration was measured by the Lowry method, using albumin (Sigma-Aldrich/Merck) as standard. The protein extracts were separated by SDS-PAGE (NuPAGE™ 4-12% Bis-Tris Protein Gels, 1.0 mm, Thermo Fisher Scientific) for approximately 1 h at 100 V and then transferred to a 7 cm × 8.4 cm, 0.45 μm pore size, hydrophobic PVDF membrane (Immobilon-P PVDF, Millipore Sigma), previously activated with methanol for 10 s. After 1 h at 140 mA, the membranes were blocked with 5% milk in TBS-T for 1 h at room temperature. The membranes were incubated overnight at 4°C with primary antibodies SAF32 (1:500) or anti-HA (1:1000) in PBS containing 0.1% Tween-20 (BioRad) and 0.25% milk. Membranes were washed in PBS + 0.1% Tween-20 and exposed to a secondary anti-mouse HRP antibody (1:10,000, Sigma-Aldrich/Merck) for 1 h at room temperature. Protein levels were detected with SuperSignal West Pico Substrate or SuperSignal West Femto Substrate (Thermo Scientific Pierce), according to the manufacturer's instructions. The membranes were revealed in a dark chamber on X-ray film (CL-XPosure Film, Thermo Scientific) in a developing cassette for 3 to 5 min. The PVDF membranes were stripped and reprobed with anti-β-tubulin (1:1000, Sigma-Aldrich/Merck) as loading control. Bands were quantified using ImageJ software.
Real-time PCR (qRT-PCR)
RNA extraction, cDNA synthesis, and qRT-PCR were performed according to Pinheiro et al. (2015). In brief, RNA was extracted using TRIzol® reagent (Invitrogen/Thermo Fisher Scientific), according to the manufacturer's instructions. Following this, cDNA was synthesized using the cDNA first strand synthesis kit (Fermentas), according to the manufacturer's instructions. The qPCR was performed using primers for the human PrPC gene (prnp), forward 5′-ACCCACAGTCAGTGGAA-3′ and reverse 5′-TATGATGGGCCTGCTCA-3′, and gapdh, forward 5′-CCAGATCATGTTTGAGACC-3′ and reverse 5′-ATGTCACGCACGATTTCCC-3′ (Invitrogen/Thermo Fisher Scientific). The quantification of prnp expression was performed using SYBR Green Real-Time PCR Master Mix (Applied Biosystems/Thermo Fisher Scientific). The cDNA amplifications were performed in an ABI 7500 system (Applied Biosystems) under the following thermocycling conditions: 50°C for 2 min, 95°C for 10 min, and 40 cycles of 95°C for 15 s and 60°C for 1 min. The expression level of each gene was normalized to the expression level of the endogenous control (gapdh).
2.10. Proviral load and detection of HTLV-1-infected cells
DNA was extracted from PBMC with the QIAamp DNA blood mini kit (Qiagen), and DNA was eluted in 30 μl. HTLV-1 proviral load was determined by quantitative PCR in a Rotor-Gene Q instrument (Qiagen), using the Rotor-Gene Probe PCR kit (Qiagen), according to the manufacturer's instructions. Primers and 5′-FAM- and 3′-TAMRA-labeled TaqMan® probes (Sigma-Aldrich) for the HTLV-1 tax and the human β-globin genes, as previously described (Silva et al., 2007), were used in independent reactions with 5 μl of DNA.
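The methods state only that prnp expression was normalized to the endogenous control gapdh (following Pinheiro et al., 2015); the exact quantification formula is not given. A minimal sketch of the commonly used 2^(−ΔΔCt) relative-quantification approach — an assumption here, not necessarily the authors' exact method, with hypothetical Ct values — would be:

```python
def rel_expression_ddct(ct_prnp, ct_gapdh, ct_prnp_ctrl, ct_gapdh_ctrl):
    """Relative prnp expression by the 2^(-ddCt) method (assumed, not
    stated in the paper): normalize to gapdh, then to a control sample."""
    d_ct = ct_prnp - ct_gapdh                 # normalize to endogenous control
    d_ct_ctrl = ct_prnp_ctrl - ct_gapdh_ctrl  # same for the reference sample
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Hypothetical Ct values: infected vs. non-infected sample
print(rel_expression_ddct(24.0, 18.0, 23.8, 17.8))  # -> 1.0, i.e. similar prnp
# mRNA levels, consistent with the paper's finding of post-transcriptional regulation
```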
HTLV-1 proviral load was calculated as tax copies/(β-globin copies/2), and the results are shown as infected cells per 100,000 cells. Qualitative PCR was performed with 10 μl of DNA in 50 μl reactions using the HotStar Taq Plus PCR kit (Qiagen), following the manufacturer's instructions, using the same primers for the HTLV-1 tax and human β-globin genes. The amplification cycle consisted of enzyme activation at 95°C for 5 min; 45 cycles of denaturation at 95°C for 30 s, annealing at 60°C for 30 s, and extension at 72°C for 30 s; and a final extension step at 72°C for 10 min. PCR products were electrophoresed in 2% agarose gel stained with GelRed® (Biotium) in 1× Tris-Borate-EDTA buffer (Invitrogen) at 100 V for 90 min. Amplification of HTLV-1 tax results in a 159 bp PCR product.
Statistical analysis
The one-dimensional probability distributions of the samples were analyzed by the Kolmogorov-Smirnov test. After that, statistical analysis was performed by one-way analysis of variance followed by Bonferroni's post-test for samples with normal distribution, or by Kruskal-Wallis analysis followed by Dunn's post-test. For comparisons of two groups, an unpaired t-test with Welch's correction or the Mann-Whitney U test was applied for samples with or without normal distribution, respectively. Correlations were analyzed by Spearman's or Pearson's rank correlation coefficient. The statistical analysis was performed using GraphPad Prism 8 software, and values of p < 0.05 were considered statistically significant.
Reduced PrPC content in HTLV-1-infected cell lines
To assess whether PrPC protein levels are affected by HTLV-1 infection, we evaluated by flow cytometry the PrPC content in the well-established T lymphoid HTLV-1-infected cell lines MT-2 and C91-PL, compared with the HTLV-1-negative immortalized T cell line Jurkat. The percentage of PrPC+ cells was reduced in the infected lines (Figure 1A and Supplementary Figure S1). Moreover, the HTLV-1-infected cells exhibited a lower mean fluorescence intensity, indicating a reduced level of PrPC per cell (Figure 1B). These results were confirmed by Western blot and fluorescence microscopy: evaluation of HTLV-1-infected cell lysates corroborated the results found by flow cytometry (Figure 1C), as did fluorescence microscopy assays, in which MT-2 cells presented reduced levels of PrPC compared to Jurkat cells (Figure 1D). TL-Om-1 and ED40515(−) cells are HTLV-1-infected T cell lines established from ATL patients (Maeda et al., 1985; Forlani et al., 2021). However, these cells do not express Tax or release virus, as shown by the low levels of p19 release into the supernatant (Figure 1E). Similar to MT-2 and C91-PL, ED40515(−) cells presented reduced PrPC levels compared to Jurkat, but TL-Om-1 cells expressed levels similar to Jurkat (Figure 1F). These results suggest that Tax expression may not be linked to PrPC content. To determine whether the reduced PrPC protein levels were related to reduced mRNA expression, we used RT-PCR to analyze PrPC gene (prnp) expression. Interestingly, the downregulation of PrPC content did not depend on inhibition of prnp gene transcription: the levels of PrPC mRNA were similar in infected and uninfected cells (Figures 1G,H), suggesting that PrPC expression is regulated post-transcriptionally.
Altered percentage of CD4+PrPC+ cells in HAM/TSP patients and ACs
Next, we evaluated whether PrPC levels were also downregulated in CD4+ T lymphocytes obtained from the peripheral blood of people living with HTLV-1.
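To make the proviral-load arithmetic above concrete, here is a small sketch (the copy numbers are hypothetical, used only for illustration):

```python
def proviral_load(tax_copies: float, bglobin_copies: float, per: float = 100_000) -> float:
    """Proviral load = tax copies / (beta-globin copies / 2), scaled per 100,000 cells.
    Each diploid cell carries two beta-globin copies, so bglobin/2 estimates the
    number of cells assayed."""
    cells = bglobin_copies / 2
    return tax_copies / cells * per

# Hypothetical example: 150 tax copies and 2,000,000 beta-globin copies
print(proviral_load(150, 2_000_000))  # -> 15.0 infected cells per 100,000
```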
We observed by flow cytometry that CD4+ T lymphocytes from infected individuals, whether asymptomatic carriers (AC) or HAM/TSP patients, display a significant reduction in the percentage of cells expressing PrPC compared with cells from non-infected individuals (Figure 2A). Approximately 72% of CD4+ T cells of AC donors and 74% of CD4+ T cells of HAM/TSP patients expressed PrPC, while 92% of CD4+ T cells of non-infected individuals expressed PrPC; HAM/TSP and AC patients thus presented similar percentages of CD4+PrPC+ lymphocytes. However, we did not observe any reduction in the PrPC amount per cell, suggesting that the PrPC+ CD4+ T lymphocytes from people living with HTLV-1 expressed equivalent levels of this protein (Figure 2B). Moreover, a correlation analysis between the percentage of PrPC+ (SAF32) cells and the proviral load was performed, but no correlation was found (Figure 2C). These results suggest that HTLV-1 infection alters PrPC levels in stably infected cell lines and in peripheral blood CD4+ T lymphocytes from infected individuals. In CD4+ T lymphocytes, the downregulation of PrPC also did not depend on inhibition of prnp gene transcription (Figure 1H), reinforcing that PrPC is regulated post-transcriptionally. As described by Prince et al. (1994), HTLV-1 infection induces the activation of CD4+ T lymphocytes and a phenomenon of spontaneous proliferation, accompanied by increased expression of the α chain of the IL-2 receptor, CD25 (Al-Fahim et al., 1999; Novaes et al., 2013). Therefore, we investigated the percentage of PrPC+ cells among CD4+CD25+ cells from people living with HTLV-1. Consistent with those reports, we detected a higher percentage of CD4+CD25+ lymphocytes in PBMCs from HTLV-1 carriers compared to controls (Figure 2D). Interestingly, we did not observe differences between the percentages of CD4+CD25+ cells among PrPC-negative (PrPCneg) and PrPC-positive (PrPC+) lymphocytes obtained from people living with HTLV-1 (Figure 2E).
HTLV-1 infection modulates PrPC
To confirm the effect of HTLV-1 infection on PrPC content, Jurkat cells were infected by co-culture with MT-2 or C91-PL cells, as described in the methodology section. As shown in Figures 3A,B, we confirmed the infection of target cells (Jurkat) by PCR for the Tax gene and by the upregulation of CD25 expression, respectively. Surprisingly, 48 h after target cell infection, a significant reduction in the percentage of PrPC+ cells (approximately 30%) was detected. This decrease was also detected at 96 h post-infection (Figure 3C), indicating that the infection may be directly related to the decrease in PrPC expression in HTLV-1-infected cells. Additionally, we compared Tax expression in MT-2 PrPCneg cells with that in MT-2 PrPC+ cells (sorted by FACS) by qRT-PCR. The results indicate that MT-2 PrPCneg cells expressed higher levels of Tax than MT-2 PrPC+ cells, reinforcing the relation between HTLV-1 infection and PrPC modulation (Figure 3D).
Expression of the Orf I-encoded HTLV-1 p12 protein is associated with decreased PrPC levels
The HTLV-1 viral genome encodes five orfs, among which Orf I encodes the p12 protein that can subsequently be processed into p8 (Koralnik et al., 1992).
The p12 and p8 proteins are involved in several functions in the infected cell, including the transmission of the virus to target cells and the regulation of host cell proteins (Albrecht et al., 2000; Nicot et al., 2004; Prooyen et al., 2010; Valeri et al., 2010; Sarkis et al., 2019). Because PrPC is associated with cell activation and inflammation, we investigated the role of p12 and p8 in PrPC expression. To achieve this goal, we used the stably infected, virus-producing B cell line 729.6 expressing the pAB wild-type virus (namely 729.6 D26) and the following Orf I mutants: 729.6 N26 cells, characterized as a mutant that predominantly expresses the p8 protein; 729.6 G29S cells, which predominantly express the p12 protein; and 729.6 Δp12 cells, which express neither p8 nor p12 (Valeri et al., 2010).

FIGURE 1 PrPC levels in cell lines. 5×10⁵ cells of the different cell lines were stained with the antibody SAF32 as described in the methods section. (A,B) PrPC levels were analyzed by flow cytometry in Jurkat cells (non-infected cells), MT-2 cells, and C91-PL cells (HTLV-1-infected cells) to compare the percentage of PrPC+ cells and the mean fluorescence intensity (MFI) of PrPC labeling. Statistical analysis was performed using ANOVA followed by Bonferroni's post-test; means were considered significantly different when *p < 0.05, **p < 0.002, and ***p < 0.0001. (C) 30 μg of total protein from 10⁶ Jurkat, MT-2, and C91-PL cells was separated by SDS-PAGE and transferred to a nitrocellulose membrane to analyze PrPC content by Western blot, using α-tubulin as a constitutive protein as described above. (D) 2×10⁵ Jurkat and MT-2 cells were centrifuged in a cytospin, fixed with 2% paraformaldehyde and blocked with PBS + 5% FBS. Cells were incubated overnight with SAF32 antibody and then with anti-mouse IgG secondary antibody conjugated with Alexa Fluor 647 as described above. Nuclear staining was performed with DAPI (2 mg/ml). Microscopy was performed using a fluorescence microscope with a 63× objective.

Like HTLV-1-infected Jurkat cells, 729.6 D26 cells showed a decreased percentage of PrPC+ cells compared to control cells (Figure 4A). This effect was also observed in the p12-expressing 729.6 G29S cells, where a reduction of approximately 24% in PrPC+ cells was detected. However, no significant reduction in PrPC+ cells was observed in 729.6 N26 or 729.6 Δp12 cells compared with the control parental 729.6 cells or Jurkat cells (Figure 4A). To confirm that p12 is important for PrPC reduction, Jurkat cells were transfected with Orf I expression plasmids: a wild-type plasmid (WT Orf I sequence) and a G29S plasmid (mutant Orf I sequence, which predominantly induces p12 production) (Fukumoto et al., 2009). The levels of PrPC were measured in Jurkat cells transfected with the WT or G29S plasmids, and we detected a reduction in PrPC protein levels in WT- and G29S-expressing Jurkat cells compared with controls (Figure 4B). Together, these results suggest that the p12 protein may play a relevant role in the downregulation of PrPC in HTLV-1-infected cells.
Silencing of PrPC does not affect HTLV-1 expression
Finally, we investigated whether reduced PrPC expression affects production of the viral structural protein Gag. We used human PrPC siRNA to target PrPC expression in MT-2 cells.
The siRNA to PrPC significantly reduced PrPC (SAF32) protein and mRNA expression compared with the FITC control or mock-transfected cells (Figure 5A). However, PrPC silencing did not affect virus expression, measured as p19 Gag in the supernatant (Figure 5B).

FIGURE 1 (caption, continued) […] PrPC levels were analyzed by flow cytometry in Jurkat cells (non-infected cells), MT-2 cells, C91-PL cells, and HTLV-1-infected cell lines without viral particle release (p19 neg/low), TL-Om-1 and ED40515(−), to compare the percentage of PrPC+ cells. Statistical analysis was performed using ANOVA followed by Bonferroni's post-test; means were considered significantly different when *p < 0.05 and **p < 0.002. (G) Total RNA was extracted from 10⁶ cells of each line (Jurkat, MT-2, and C91-PL) or (H) from sorted CD4+ T lymphocytes from PBMCs of non-infected individuals and HTLV-1 carriers (HTLV-1+). RNA was subjected to an RT-PCR reaction with random primers to obtain cDNA, and qPCR reactions were performed with specific primers for gapdh mRNA (housekeeping gene) and prnp (the PrPC gene).

FIGURE 2 Percentage of CD4+PrPC+ cells from people living with HTLV-1. (A,B) 5×10⁵ PBMCs obtained from non-infected individuals (NI, n = 27), asymptomatic carriers (AC, n = 26), and HAM/TSP patients (n = 24) were stained with the antibody SAF32 and lineage-specific antibodies as described in the methods section. PrPC levels were analyzed by flow cytometry in CD4+ cells from the three groups to compare the percentage of PrPC+ cells and the mean fluorescence intensity of PrPC labeling. ***p = 0.0008 determined by Kruskal-Wallis test. (C) Spearman's correlation test between the percentage of PrPC in CD4+ cells and proviral load, r = 0.6792 and p = 0.6465. (D) Percentage of CD25+ in CD4+ cells from PBMCs of people living with HTLV-1 (n = 15) and non-infected donors (n = 9). **p = 0.0013 determined by Mann-Whitney test. (E) Percentage of CD25+ in PrPCneg CD4+ cells or PrPC+ CD4+ cells from PBMCs of people living with HTLV-1 (n = 15). (F) Percentage of IFN-γ+ in PrPCneg CD4+ cells or PrPC+ CD4+ cells from PBMCs of people living with HTLV-1 (n = 16); *p = 0.0228 determined by Mann-Whitney test. (G) Percentage of TNF-α+ in PrPCneg CD4+ cells or PrPC+ CD4+ cells from PBMCs of people living with HTLV-1 (n = 6); p = 0.07 determined by Mann-Whitney test. (H) Percentage of IL-17+ in PrPCneg CD4+ cells or PrPC+ CD4+ cells from PBMCs of people living with HTLV-1 (n = 11). Each symbol represents one donor, and the bar indicates the median value.

Discussion
PrPC is involved in the pathogenesis of neurodegenerative diseases, such as dementia with Lewy bodies and Pick's disease, forming aggregates with α-synuclein, amyloid β aggregates, and tau protein (Corbett et al., 2020). Furthermore, increased levels of soluble PrPC have been detected in the cerebrospinal fluid of HIV-1-infected individuals with symptoms of cognitive disorders. This effect was also observed in the cerebrospinal fluid of monkeys infected with SIV, an animal model for the comparative study of HIV-1 infection, where it correlated with the worsening severity of encephalopathy (Roberts et al., 2010). The biology of PrPC is modulated by inflammatory cytokines and chemokines, such as IL-6, TNF-α, IL-8, and CCL4/MIP-1β, among others (Stoeck et al., 2014), which are also observed in patients with HAM/TSP (Champs et al., 2019; Souza et al., 2021; Freitas et al., 2022).
Our study has pioneered the investigation of PrPC expression in CD4+ T lymphocytes from people living with HTLV-1. Our results demonstrate that infected individuals, both asymptomatic carriers and HAM/TSP patients, have a reduced percentage of PrPC+ CD4+ T cells, representing a reduction of 20-25% compared with non-infected individuals. Corroborating these findings, Souza et al. (2021) demonstrated that patients with HAM/TSP have increased free PrPC in the cerebrospinal fluid compared with asymptomatic patients, suggesting shedding of PrPC in symptomatic individuals. PrPC shedding occurs during lymphocyte activation, cell-cell contact, and apoptosis. In addition, stimulation with CCL2 and TNF-α (200 ng/mL and 10 ng/mL, respectively) induces PrPC shedding in astrocytes in vitro (Megra et al., 2017), and the authors connected this shedding with ADAM10 metalloprotease activation induced by these cytokines (Megra et al., 2017). In PBMCs stimulated with a low dose of TNF-α (1 ng/mL), we did not observe a reduction in the percentage of PrPC-positive cells (data not shown). Using HTLV-1-infected cells (MT-2, C91-PL, and ED40515[−]), we observed a decrease in the percentage of PrPC+ cells and in protein levels. These results were confirmed by in vitro infection of Jurkat and 729.6 cells, which also promoted a reduction in PrPC levels. No differences were found in prnp gene transcript levels in CD4+ T cells obtained from infected and uninfected individuals, or in cell lines, suggesting that PrPC expression is regulated in infected cells by a post-transcriptional event. Thus, we reasoned that the downmodulation of PrPC could be a result of the activity of viral proteins such as p12 and p8. Using cells transfected with different constructs, we demonstrated that the p12 viral protein is related to the downregulation of PrPC. In agreement with our findings, it has already been described that p12 induces a decrease in the expression of other molecules in cells infected by HTLV-1: p12 is capable of binding to the heavy chain of the MHC-I molecule in the rough endoplasmic reticulum and preventing its association with β-2 microglobulin. The absence of binding of the MHC-I heavy chain to β-2 microglobulin induces the translocation of p12-associated MHC-I to the proteasome in the cytosol, promoting the degradation of the complex. Consequently, the infected cell presents reduced expression of MHC-I on the cell surface and reduced presentation of HTLV-1 antigens, making it less susceptible to the action of cytotoxic cells (Johnson et al., 2001; Pise-Masison et al., 2014). Using a reversible inhibitor of the proteasome, MG-132 (5 mM), in Jurkat cells transfected with the WT p12 plasmid (D26), we did not observe any modification in PrPC levels (Supplementary Figure 4). In addition, the p12 viral protein alters the distribution and expression of LFA-1 and ICAM-1. The expression of these adhesion molecules occurs in cholesterol-rich regions of the plasma membrane (lipid raft domains), as does that of PrPC (Kim et al., 2006; Banerjee et al., 2007; Westergard et al., 2007; Prooyen et al., 2010).

FIGURE 3 PrPC levels in target cells after HTLV-1 infection. 10⁶ Jurkat cells (target cells) were incubated with 1 μM CFSE for 15 min at 37°C and washed two times with PBS. Next, CFSE+ target cells were incubated with 10⁶ cells of the MT-2 or C91-PL lineage, previously irradiated (2000 rads). (A) After 48 h or 96 h, PCR was used to detect a 159-bp fragment of the HTLV-1 tax or β-globin genes in DNA samples obtained from Jurkat cells (J), MT-2 cells (not irradiated) and CFSE+ Jurkat cells co-cultivated with MT-2 cells (J + MT-2). L = ladder. (B) 5×10⁵ cells of the different cell lines were stained with anti-human CD25 antibody conjugated with PE or (C) SAF32 as described in the methods section. PrPC expression was analyzed by flow cytometry in Jurkat cells (non-infected target cells) and Jurkat cells co-cultivated with MT-2 or C91-PL cells to compare the fold change in the percentage of PrPC+ cells after HTLV-1 infection. Statistical analysis was performed using ANOVA followed by Bonferroni's post-test; means were considered significantly different when **p < 0.0013. (D) MT-2 cells were stained with SAF32 antibody as described above to isolate PrPCneg and PrPC+ cells by FACS. RNA of PrPCneg and PrPC+ cells was extracted and subjected to an RT-PCR reaction with random primers to obtain cDNA, and qPCR reactions were performed with specific primers for gapdh mRNA (housekeeping gene) and the tax gene. The means of two independent experiments are represented in the graph, and each symbol (○, •) corresponds to one experiment.

FIGURE 4 Role of the p12 viral protein in PrPC reduction. 5×10⁵ cells of Jurkat cells, the B cell line 729.6, 729.6 D26 cells stably infected with the pAB wild-type virus, 729.6 N26 cells (mutant that predominantly expresses the p8 viral protein), 729.6 G29S cells (predominantly expressing the p12 protein), and 729.6 Δp12 cells (not expressing p8 or p12) were stained with the antibody SAF32 as described in the methods section. (A) PrPC levels were analyzed by flow cytometry in Jurkat and 729.6 cells (non-infected cells) and in the infected or transfected cells to compare the percentage of PrPC+ cells. Statistical analysis was performed using ANOVA followed by Bonferroni's post-test; means were considered significantly different when *p < 0.05. (B) 30 μg of total proteins from 10⁶ Jurkat cells or Jurkat cells transfected with cDNA plasmids from the Orf I region, wild-type plasmid (WT Orf I sequence) or G29S plasmid (mutant Orf I sequence, which predominantly induces p12 production); cells without transfection (negative, N) and cells transfected with the Pmax-GFP plasmid (P) were used as controls. Proteins were separated by SDS-PAGE and transferred to a nitrocellulose membrane to analyze PrPC content by Western blot, using α-tubulin as a constitutive protein as described above.

FIGURE 5 Effect of PrPC silencing in HTLV-1-infected cells. (A) 10⁶ MT-2 cells were transfected with siRNA for PrPC using Lipofectamine 2000 and incubated for 24 h in RPMI medium at 37°C in a humid atmosphere with 5% CO₂. Next, the cells were lysed with RIPA buffer, and 40 μg of total proteins was run on SDS-PAGE and transferred to a nitrocellulose membrane. The membrane was labeled with SAF32 and anti-α-tubulin antibodies. Relative densitometry of SAF32 in relation to α-tubulin was performed using ImageJ software. (B) p19 concentration in the supernatant of PrPC-silenced MT-2 cells determined by ELISA. Mean ± SEM from two independent experiments is represented in the graph.
It was previously reported that PrPC expression in HEK293 cells reduces the expression of HIV Pr55Gag and viral particle production (Leblanc et al., 2004). The anti-HIV properties of PrPC were linked to its binding to the viral genome and the resulting reduction of translation (Alais et al., 2012); thus, the reduction of PrPC favors HIV replication. Moreover, PrPC dysregulation was detected in cognitively impaired HIV-1-infected individuals, suggesting its contribution to the pathogenesis of HIV-1-associated CNS disease. Indeed, increased levels of soluble PrPC were observed in the cerebrospinal fluid of patients with HIV-associated neurocognitive impairment. In addition, after in vitro addition of PrPC to cultures, an increase in the production of both CCL2 and IL-6 by astrocytes was reported, suggesting that PrPC is a biomarker of HIV-associated neurocognitive impairment (Roberts et al., 2010). In contrast, the reduction of PrPC in HTLV-1-infected cells was not associated with decreased viral production: PrPC silencing neither altered the production of HTLV-1 viral particles in MT-2 cells nor significantly impacted viral transmission to Jurkat cells. It is well known that HTLV-1 infection induces the activation of T lymphocytes, leading to spontaneous proliferation, expression of molecules associated with cell activation, and the production of pro-inflammatory cytokines (Prince et al., 1994; Novaes et al., 2013; Coutinho et al., 2014; Futsch et al., 2018). Studies using the murine experimental autoimmune encephalomyelitis (EAE) model have shown that PrPC-deficient animals (knockout or silenced) show increased transcripts and secretion of pro-inflammatory cytokines such as IL-17 and IFN-γ, as well as a significant enhancement in the expression of the transcription factors T-bet and RORγt (Tsutsui et al., 2008; Hu et al., 2010). Consistent with those findings, we found an increased percentage of IFN-γ+ cells in the PrPC-negative CD4+ cell population obtained from people living with HTLV-1. In conclusion, we have shown that HTLV-1 infection induces a reduction in PrPC levels, linked to the viral protein p12. Moreover, a reduction of PrPC was also observed in lymphocytes from people living with HTLV-1, with IFN-γ-producing cells significantly enriched among the PrPC-negative population. These findings may be linked to the increased levels of PrPC in CSF, suggesting that PrPC could be included as a biomarker for HAM/TSP.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material. Further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by the Comitê de Ética em Pesquisa do Instituto Nacional de Infectologia Evandro Chagas - INI/FIOCRUZ. The patients/participants provided their written informed consent to participate in this study.
Author contributions
RL, IS, GF, CP-M, and JE-L designed the experiments, analyzed the data, and wrote the manuscript with the collaboration of all authors. IS, AG, and RM carried out experiments and analyzed the data. OE carried out assays of HTLV-1 proviral load and obtained ethics approval. ML and AL carried out clinical evaluation and supplied blood samples. All authors reviewed and approved the manuscript.
A Neural Network Approach to Value R&D Compound American Exchange Options
In this paper we show how the neural network methodology, coupled with the Least Squares Monte Carlo approach, can be very helpful in valuing R&D investment opportunities. As is well known, R&D projects are carried out in a phased manner, with the commencement of each subsequent phase depending on the successful completion of the preceding phase. This is known as a sequential investment, and therefore R&D projects can be considered as compound options. In addition, R&D investments often involve considerable cost uncertainty, so they can be viewed as an exchange option, i.e. a swap of an uncertain investment cost for an uncertain gross project value. Finally, the production investment can be realized at any time before the maturity date, after which the effects of the R&D disappear. Consequently, an R&D project can be considered as a compound American exchange option. In this context, the Least Squares Monte Carlo method is a powerful and flexible tool for capital budgeting decisions and for valuing American-type options. Moreover, using the simulated values as "targets", the implementation of a neural network makes it possible to extend the results to any R&D valuation and to reduce the waiting time of the Least Squares Monte Carlo simulation.
Introduction
R&D investments are considered an important driving force for the growth of the modern economy. For analysts, it is very important to value these investments while taking their uncertainty into account. As is well known, R&D projects are characterized by the sequentiality of their investments and by the flexibility to realize the production investment at any time before the expiration of the R&D innovation. In this scenario, the real options approach can capture these aspects, unlike the Net Present Value (NPV) and the Internal Rate of Return (IRR), which underestimate R&D projects. In particular, R&D projects can be considered as compound American exchange options (CAEO) in which both the gross project value and the investment cost are uncertain. Papers that deal with exchange option valuation include Margrabe (1978), McDonald and Siegel (1985), Carr (1988), Carr (1995) and Armada et al. (2007). In particular, McDonald and Siegel (1985) value a simple European exchange option, Carr (1988) develops a model to price a compound European exchange option, and Armada et al. (2007) propose a Richardson extrapolation in order to value a simple American exchange option. These models consider that assets distribute "dividends" which, in a real options context, are the opportunity costs incurred if an investment project is postponed (Myers, 1977). However, the analytical computation of the CAEO is more difficult, and it is convenient to implement a numerical method. Numerical approximation is therefore an important task, as witnessed by the contributions of Tilley (1993), Barraquand and Martineau (1995) and Broadie and Glasserman (1997). The first goal of our paper is to implement a Monte Carlo methodology in order to value a CAEO in the context of R&D investment. To realize this objective, building on Cortelezzi and Villani (2009) and Villani (2014), we present the Least Squares Monte Carlo (LSM) method proposed by Longstaff and Schwartz (2001) in order to value the CAEO. Although this approach is accurate, the time required to simulate this kind of option is very long.
Consequently, the second aim of this paper is to build a neural network architecture based on a Back Propagation (BP) system, using the simulation results as "targets" in the learning phase. As there is no market valuation of the CAEO, the advantages of this approach are, first of all, the speed and accuracy of the computations and, secondly, the possibility of extending the trained neural network to value any R&D investment project. To assess our method, we compare the BP approach with Radial Basis Function (RBF) and General Regression (GRNN) networks. Computing power has allowed nonlinear methods to become applicable to modeling and forecasting a host of economic and financial relationships. Neural networks, in particular, have been applied to many of these empirical cases. For instance, Aminian et al. (2006) compare the predictive power of the linear regression model against the fully generalized nonlinear neural network, with the improvement exposing the degree of nonlinearity present in the relationship investigated. Their study uses neural networks as an efficient nonlinear regression technique to assess the validity of linear regression in modeling financial data. Andreou et al. (2006) show that artificial neural network models using the Huber function outperform those optimized with least squares. Eskiizmirliler et al. (2020) approximate the unknown option value function using a trial function, which depends on a neural network solution and satisfies the given boundary conditions of the Black-Scholes equation. Arin and Ozbayoglu (2020) develop hybrid deep learning based option pricing models to achieve better pricing than Black-Scholes; their results indicate that the proposed models can generate more accurate prices for all option classes. The RBF method, as a meshless technique, has been suggested to solve the time-fractional Black-Scholes model for the European option pricing problem (Golbabai et al., 2019). The literature studying real options in the neural network context is not very extensive. For instance, Ma (2016) uses the real options method to model petroleum exploration and development projects, selects an appropriate option pricing method, analyzes instance data from gas exploration, and points out that the application of the real options method can effectively improve investment project evaluation. Moreover, Taudes et al. (1998) propose using neural networks to value options by approximating the value function of the dynamic program, taking the current state as input for each mode of operation and yielding the mode to be chosen as output. The paper is organized as follows. Section 2 presents the structure of an R&D investment and its evaluation in terms of real options, while Sect. 3 illustrates the valuation of the CAEO using the LSM approach. The implementation of the neural network architecture is presented in Sect. 4, and some numerical applications are proposed in Sect. 5. Finally, Sect. 6 concludes.
R&D Structure as Real Option
In this section, we present a two-stage R&D investment whose structure is the following: R is the research investment spent at the initial time t0 = 0; IT is the investment technology to develop the innovation, paid at time t1; D is the production investment required to obtain the R&D project's value; and V is the R&D project value. We assume that IT = qD is a proportion q of asset D, so it follows the same stochastic process as D, and the production investment D can be realized between t1 and T.
In particular, by investing R at time t0, the firm obtains a first investment opportunity that can be valued as a CAEO, denoted by C(S_k, IT, t1). This option allows the firm to realize the investment technology IT at time t1 and to obtain, as underlying asset, the option to realize the market launch. Let S_k(V, D, T − t1) denote this option value at time t1, with maturity date T − t1 and exercisable k times. In detail, during the market launch, the firm has another investment opportunity: to invest D between t1 and T and to receive the R&D project value V. Specifically, using the LSM approach, the firm must decide whether to invest D or to wait at each discrete time s_k = t1 + kΔt, for k = 0, 1, 2, …, h, with Δt = (T − t1)/h, where h is the number of discretizations. In this way we capture the managerial flexibility to invest D before the maturity T and so to realize the R&D cash flows. Figure 1 depicts the R&D investment structure. We assume that V and D follow geometric Brownian motions:

dV_t = (μ_v − δ_v) V_t dt + σ_v V_t dZ^v_t,   dD_t = (μ_d − δ_d) D_t dt + σ_d D_t dZ^d_t,   Cov(dZ^v_t, dZ^d_t) = ρ_vd dt,

where μ_v and μ_d are the expected rates of return, δ_v and δ_d are the corresponding dividend yields, σ_v² and σ_d² are the respective variance rates, ρ_vd is the correlation between changes in V and D, and (Z^v_t)_{t∈[0,T]} and (Z^d_t)_{t∈[0,T]} are two Brownian processes defined on a filtered probability space (Ω, A, {F_t}_{t≥0}, P), where Ω is the space of all possible outcomes, A is a sigma-algebra, P is the probability measure and {F_t}_{t≥0} is a filtration with respect to Ω. Assuming that the firm keeps a portfolio of activities which allows it to value activities in a risk-neutral way, the dynamics of the assets V and D under the risk-neutral martingale measure Q are given by:

dV_t = (r − δ_v) V_t dt + σ_v V_t dZ*^v_t,   dD_t = (r − δ_d) D_t dt + σ_d D_t dZ*^d_t,   Cov(dZ*^v_t, dZ*^d_t) = ρ_vd dt,

where r is the risk-free interest rate and Z*^v_t, Z*^d_t are two standard Brownian motions under the probability Q with correlation coefficient ρ_vd. After some manipulation, we get the equations for the price ratio P = V/D and for D_T under the probability Q:

D_T = D_0 exp[(r − δ_d) T] · exp(U),   U = σ_d Z*^d_T − (σ_d²/2) T,

where D_0 is the value of asset D at the initial time. We can observe that U ~ N(−(σ_d²/2) T, σ_d² T) and therefore exp(U) is log-normally distributed with expectation E_Q[exp(U)] = 1. By Girsanov's theorem, we define a new probability measure Q̃ equivalent to Q whose Radon-Nikodym derivative is:

dQ̃/dQ = exp(U).

Hence, substituting in (8), we can write the value under Q̃. By Girsanov's theorem, the processes

Ẑ^d_t = Z*^d_t − σ_d t   and   Z′_t = (Z*^v_t − ρ_vd Z*^d_t)/√(1 − ρ_vd²)

are two Brownian motions under the risk-neutral probability space (Ω, A, F, Q̃), and Z′ is a Brownian motion under Q̃ independent of Ẑ^d. By using Eqs. (11) and (12), we can now obtain the risk-neutral price ratio P:

P_t = P_0 exp[(δ_d − δ_v − σ²/2) t + σ Ẑ_t],   where σ² = σ_v² − 2 ρ_vd σ_v σ_d + σ_d² and Ẑ is a Brownian motion under Q̃.

Valuation of CAEO Using the LSM Method
The value of the CAEO can be determined as the expected value of discounted cash flows under the risk-neutral probability Q:

C(S_k, IT, t1) = e^{−r t1} E_Q[max(S_k(V_{t1}, D_{t1}, T − t1) − IT, 0)].

Taking the asset D as numeraire and using Eq. (10), we obtain:

C(S_k, IT, t1) = D_0 e^{−δ_d t1} E_Q̃[max(S_k(P_{t1}, 1, T − t1) − q, 0)],

where IT = q D_{t1}. The market launch phase S_k(P_{t1}, 1, T − t1) can be analyzed using the LSM method. As in any American option valuation, the optimal exercise decision at any point in time is obtained as the maximum between the immediate exercise value and the expected continuation value. The LSM method allows us to estimate the conditional expectation function at each exercise date and thus to obtain a complete specification of the optimal exercise strategy along each path. The method starts by simulating n price paths of the asset P_{t1} defined by Eq. (13); let P^i_{t1}, i = 1, …, n, denote the simulated prices. Starting from each i-th simulated path, we then simulate a discretization of Eq. (13) for k = 1, …, h.
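To make the dynamics of the price ratio concrete, here is a minimal Python sketch that simulates P on [t1, T] under Q̃ using the log-normal solution above (the paper's own code is in Matlab; all parameter values here are hypothetical):

```python
import numpy as np

def simulate_price_ratio(P0, delta_v, delta_d, sig_v, sig_d, rho,
                         t1, T, h, n_paths, seed=0):
    """Simulate paths of P = V/D on [t1, T] under Q~:
    dP/P = (delta_d - delta_v) dt + sigma dZ^,
    with sigma^2 = sig_v^2 - 2 rho sig_v sig_d + sig_d^2."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(sig_v**2 - 2 * rho * sig_v * sig_d + sig_d**2)
    dt = (T - t1) / h
    drift = (delta_d - delta_v - 0.5 * sigma**2) * dt
    shocks = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, h))
    log_paths = np.cumsum(drift + shocks, axis=1)
    return P0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

# Hypothetical parameters: P0 = V0/D0, dividend yields, volatilities, correlation
paths = simulate_price_ratio(P0=1.1, delta_v=0.05, delta_d=0.03,
                             sig_v=0.40, sig_d=0.25, rho=0.2,
                             t1=1.0, T=3.0, h=100, n_paths=10_000)
print(paths.shape)  # (10000, 101): each row is one simulated path of P
```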
The process is repeated m times over the time horizon T. Starting from the terminal prices P^{i,j}_T, for j = 1, …, m, the option value at T can be computed as S_0(P^{i,j}_T, 1, 0) = max(P^{i,j}_T − 1, 0). Working backward, at time s_{h−1} the process is repeated for each j-th path. In this case, the expected continuation value may be computed using the analytic expression for a European option, S_1(P^{i,j}_{s_{h−1}}, 1, Δt). Moving backwards, at time s_{h−1} the management must decide whether or not to invest. The value of the option is maximized if the immediate exercise value exceeds the continuation value, i.e.:

P^{i,j}_{s_{h−1}} − 1 ≥ S_1(P^{i,j}_{s_{h−1}}, 1, Δt).

We can find the critical ratio P*_{s_{h−1}} that solves inequality (16). However, it is very burdensome to compute the expected continuation value at all earlier times and thus to determine the critical prices P*_{s_k}, k = 1, …, h − 2, as shown in Carr (1995). The main contribution of the LSM method is to determine the expected continuation values by regressing the subsequent discounted cash flows on a set of basis functions of the current state variables. As described in Abramowitz and Stegun (1970), common choices of basis functions are the weighted power, Laguerre, Hermite, Legendre, Chebyshev, Gegenbauer and Jacobi polynomials. In our paper we consider as basis functions a three-term weighted power polynomial. Let L_w be the basis of functional forms of the state variable P^{i,j}_{s_k} that we use as regressors, with w = 1, 2, 3. At time s_{h−1}, the least squares regression is equivalent to solving the following problem:

min over a = (a_1, a_2, a_3) of Σ_{j=1..m} [ Y^j − Σ_{w=1..3} a_w L_w(P^{i,j}_{s_{h−1}}) ]²,

where Y^j is the discounted subsequent cash flow along path j. The optimal â = (â_1, â_2, â_3) is then used to estimate the expected continuation value along each path P^{i,j}_{s_{h−1}}, j = 1, …, m:

Ê[continuation | P^{i,j}_{s_{h−1}}] = Σ_{w=1..3} â_w L_w(P^{i,j}_{s_{h−1}}).

After that, the optimal decision for each price path is to choose the maximum between the immediate exercise value and the expected continuation value. Proceeding recursively until time t1, we obtain a final vector of continuation values for each price path P^{i,j}_{s_k}, which allows us to build a stopping-rule matrix in Matlab that maximizes the value of the American option. As a consequence, the i-th option value approximation Ŝ^i_k(P^i_{t1}, 1, T − t1) can be determined by averaging all discounted cash flows generated by the option at each date over all paths j = 1, …, m. Finally, it is possible to implement a Monte Carlo simulation to approximate the CAEO:

C(S_k, IT, t1) ≈ D_0 e^{−δ_d t1} (1/n) Σ_{i=1..n} max(Ŝ^i_k(P^i_{t1}, 1, T − t1) − q, 0).

"Appendix A" illustrates the complete Matlab algorithm used to value the CAEO. We conclude that, applying the real options methodology, the R&D project will be undertaken at time t0 if C(S_k, IT, t1) − R is positive; otherwise the investment will be rejected.
Feed-Forward Neural Networks to Value the CAEO
In this section we describe the neural network architecture used to value the CAEO: a BP system in which the input layer is composed of n = 10 nodes, one for each input variable, and one hidden layer with p = 6 nodes, as shown in Fig. 2. Following Eskiizmirliler et al. (2020), we describe the BP neural network. The yellow circles are the ten input parameters, the blue circles are the six nodes in the hidden layer, and the pink node denotes the CAEO output produced by the BP structure. Moreover, the red and green lines denote negative (inhibitory) and positive (excitatory) connections, respectively, depending on the weights connecting the nodes; the thickness represents the intensity of the link. For the learning phase, the network is parameterized on a sufficiently large number of targets given by the previous Monte Carlo LSM estimates of the CAEO, summarized in Tables 4 and 5.
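The paper's complete algorithm is given in Matlab in Appendix A (not reproduced here); as a language-agnostic illustration of the backward-induction-with-regression idea just described, a compact Python sketch follows. Discounting the cash flows at the dividend yield δ_d (consistent with working in units of the numeraire D) and the regression on in-the-money paths only are modelling assumptions of this sketch, not necessarily the paper's exact implementation:

```python
import numpy as np

def lsm_american_exchange(paths, delta_d, dt, degree=3):
    """LSM value of the American exchange option S(P, 1, T - t1) on
    simulated price-ratio paths; the payoff at exercise is max(P - 1, 0).
    Continuation values are regressed on simple powers of P (1, P, P^2, P^3),
    mirroring the three-term weighted power basis used in the paper."""
    n_paths, n_steps = paths.shape
    disc = np.exp(-delta_d * dt)                 # one-step discount factor
    cash = np.maximum(paths[:, -1] - 1.0, 0.0)   # exercise value at maturity
    for k in range(n_steps - 2, 0, -1):          # backward induction
        cash *= disc
        P = paths[:, k]
        itm = P > 1.0                            # regress on in-the-money paths only
        if itm.sum() > degree + 1:
            A = np.vander(P[itm], degree + 1)    # columns: P^3, P^2, P, 1
            coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
            continuation = A @ coef
            exercise = P[itm] - 1.0
            ex_now = exercise > continuation     # optimal decision per path
            idx = np.where(itm)[0][ex_now]
            cash[idx] = exercise[ex_now]
    return float(np.mean(cash * disc))           # discount the first step back to t1

# Usage with the path simulator sketched above:
# S_hat = lsm_american_exchange(paths, delta_d=0.03, dt=(3.0 - 1.0) / 100)
```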
The idea is to use the Monte Carlo values, whose simulation time is very long, as targets in the training phase in order to extend, through the BP neural network, the valuation to any input vector. This approach allows a drastic reduction in simulation time. For the Monte Carlo approach, we used x = 100 discretizations, m = 50,000 simulations for the American option and n = 30,000 paths for the compound option. We recall that for each path i = 1, …, n there are m = 50,000 trajectories to simulate the American exchange option; this increases the simulation time of the CAEO but yields better accuracy. We propose a logistic activation function between the input and hidden layers; its property as an approximator is well established (see White 1990). In our model, we assume one hidden layer with six nodes and one output layer, i.e. the neural value of the CAEO, with a pureline (linear) activation function. Each node performs computation and transformation operations. In particular, in the hidden layer, the aggregation function used is the weighted sum

a_j = Σ_{i=1..10} w_{ji} x_i + b_j,   j = 1, …, 6,

where the x_i are the input values for i = 1, …, 10 and b_j is a threshold value named bias (for more detail see Hecht-Nielsen 1990). As seen in Fig. 2, a feed-forward neural network model including a single hidden layer, which takes inputs from the input layer and produces the weighted sum of the inputs added onto bias values as outputs, is preferred to solve the problem effectively. The output produced by each node of the hidden layer is obtained by the logistic activation function

z_j = 1 / (1 + e^{−a_j}),

and this output becomes the input for the output layer. In the same fashion,

y′ = g( Σ_{j=1..6} w′_j z_j + b′ )

is the output that the network produces at the end of the first cycle of learning, where g is the pureline activation function and w′_j and b′ are the weights and the bias, respectively. Moreover, BP networks use a learning algorithm based on the conventional gradient-descent method, in which the input-output pairs are introduced iteratively to the network and the weights are updated and modified appropriately in order to reach the minimum of the mean squared error (MSE) function

E = (1/K) Σ_{k=1..K} (y_k − y′_k)²,

where K is the number of input-output pairs from the LSM simulation, y_k is the real output (target) associated with input vector k, and y′_k is the neural value. For the numerical solution of the minimization problem defined above, the gradient descent method is considered. Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function: one takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point. In particular, the weights are updated by back-propagating the value of the error function E from the output layer, through the hidden layer, back to the initial layer:

w′_j(q + 1) = w′_j(q) − η ∂E/∂w′_j,

where w(q) is the value of the weights at iteration q and η is the learning rate, which we set to η = 0.60. The choice of the learning rate is important, as the descent parameter η plays a vital role in the convergence of the algorithm: a lower η causes a long running time, which becomes computationally expensive, whereas large η values in general imply divergence from the solution.
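The 10-6-1 architecture and the update rule can be made concrete with a minimal NumPy sketch of the BP training loop. The synthetic data, weight initialization and epoch count below are illustrative assumptions (in the paper the targets are the normalized LSM values of Tables 4 and 5); only the structure — logistic hidden layer, linear output, full-gradient descent with η = 0.60 on the MSE — follows the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: K vectors of the 10 CAEO input parameters,
# with a smooth synthetic target standing in for the normalized LSM values.
K, n_in, n_hid = 100, 10, 6
X = rng.uniform(0.0, 1.0, size=(K, n_in))
y = 0.5 + 0.5 * np.tanh(X.sum(axis=1, keepdims=True) - 5.0)  # stand-in targets in (0, 1)

W1 = rng.normal(scale=0.5, size=(n_in, n_hid)); b1 = np.zeros(n_hid)   # input -> hidden
W2 = rng.normal(scale=0.5, size=(n_hid, 1));    b2 = np.zeros(1)       # hidden -> output
eta = 0.60  # learning rate used in the paper

def logistic(a):
    return 1.0 / (1.0 + np.exp(-a))

for epoch in range(5000):
    # Forward pass: a_j = sum_i w_ji x_i + b_j, z_j = logistic(a_j), y' = W2 z + b2
    A = X @ W1 + b1
    Z = logistic(A)
    y_hat = Z @ W2 + b2

    err = y_hat - y
    E = np.mean(err ** 2)               # E = (1/K) sum_k (y_k - y'_k)^2

    # Backpropagation: gradients of E, then gradient-descent updates
    dy = 2.0 * err / K                  # dE/dy'
    dW2 = Z.T @ dy;  db2 = dy.sum(axis=0)
    dA = (dy @ W2.T) * Z * (1.0 - Z)    # logistic derivative z(1 - z)
    dW1 = X.T @ dA;  db1 = dA.sum(axis=0)

    W2 -= eta * dW2; b2 -= eta * db2
    W1 -= eta * dW1; b1 -= eta * db1

print(f"final training MSE: {E:.6f}")
```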
Other Methods: Radial Basis Function (RBF) and General Regression Neural Network (GRNN)
As we have seen, the BP network is one of the most widely used neural networks. It is a multi-layer network which includes at least one hidden layer. First the input is propagated forward through the network to get the response of the output layer; then the sensitivities are propagated backward to reduce the error. During this process, the weights in all hidden layers are modified. As the propagation continues, the weights are continuously adjusted and the precision of the output improves. Radial Basis Function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function, and a linear output layer. The RBF network is thus a three-layer feed-forward neural network with a hidden layer between the input and output layers. During training, vectors are input to the first layer and fanned out to the hidden layer. In the latter, a cluster of RBF functions turns the input into output, adjusting the weights from the input to the hidden layer; then, supervised by the target vector, the weights of the output vector of the hidden layer are adjusted. When clustering, the Euclidean distance between the input vectors and the weight vectors, which have been adjusted by the training process, is calculated and each input sample is assigned to a class; the output layer then collects samples belonging to the same classes and organizes an output vector, the final clustering. The most common form of basis function is the Gaussian. The hidden units of the RBF network are formed from the distance between a prototype vector and the input vector, transformed by a non-linear basis function. The basic structure of an RBF neural network includes an n-dimensional input layer, a larger hidden layer of dimension p (p > n) and the output layer. A typical radial basis function is the Gaussian

φ_j(x) = exp( −‖x − c_j‖² / (2 r_j²) ),   j = 1, …, p,

where p is the number of neurons in the hidden layer, ‖·‖ is the Euclidean norm, c_j and r_j are the center and width of hidden neuron j, respectively, and the output weights a_j are given by formula (19). The output is given by the following linear transformation:

y(x) = Σ_{j=1..p} a_j φ_j(x).

The Generalized Regression Neural Network (GRNN), suggested by Specht (1991), belongs to the family of RBF networks, with the assumption that the number of neurons in the hidden layer equals the sample size of the training data and that the center of the i-th neuron is just the i-th sample x_i. We remark that the GRNN directly produces a predicted value without a training process.
Numerical Results
Figure 3a shows the K = 100 LSM data used in training; their values have been normalized between 0 and 1. The lowest value corresponds to a CAEO value of 1.850, while the highest corresponds to 32.250. In the same figure, the red line depicts the average value of the LSM simulation. It is also interesting to analyze Fig. 3b, which illustrates the evolution of the standard error in the training phase over the learning cycles: it shows how the average training error decreases to the level 0.0034. To illustrate our results, we use the neural network to simulate the CAEO value starting from the initial parameter values in Table 1. The neural network simulates the CAEO respecting the sensitivities of the several variables: in particular, the CAEO increases when the asset V and the volatilities σ_v and σ_d rise, while it decreases when the investment costs D and IT increase.
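Since the GRNN needs no iterative training, its prediction step fits in a few lines. A minimal sketch under the Gaussian-kernel formulation just described (the kernel width and the random stand-in data are assumptions; the paper's K = 100 normalized LSM values would take the place of the training arrays):

```python
import numpy as np

def grnn_predict(X_train, y_train, x_new, sigma=0.1):
    """GRNN (Specht 1991): one hidden neuron per training sample; the
    prediction is a Gaussian-kernel weighted average of the targets,
    with no iterative training. sigma is the common kernel width."""
    d2 = np.sum((X_train - x_new) ** 2, axis=1)   # squared Euclidean distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian activations
    return np.dot(w, y_train) / np.sum(w)         # normalized weighted average

# Hypothetical normalized data standing in for the LSM training set
rng = np.random.default_rng(1)
X_train = rng.uniform(size=(100, 10))
y_train = rng.uniform(size=100)
print(grnn_predict(X_train, y_train, rng.uniform(size=10)))
```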
The advantage of having a neural network to simulate the CAEO is, first of all, the time needed to obtain the simulated output with a low standard error (Fig. 3 shows the Back Propagation training results). We remark that the LSM Monte Carlo is accurate, with an average standard error of 0.0094, but its simulation time is very long. Another advantage of the neural network is that it describes the influence that all the variables produce on the CAEO value. As reported in Table 2, the most important parameters are the volatility of the gross project σ_V, the gross project value V and the volatility σ_D. It is also possible to verify, as shown in Fig. 4, that most of the results are accurate: they are in fact almost all arranged on the fit line. The correlation coefficient (R) of the linear fit (y = ax) is 0.999, giving an almost perfect fit, something of course expected since it was this data set that was used for the training of the network. The very good fitting values indicate that the training was done very well. These results verified the success of BP neural networks in recognizing the implicit relationships between input and output variables. Finally, to appreciate the goodness of the BP method, we compare it with the RBF and the general regression GRNN networks. Some significant results are summarized in Table 3. To evaluate the goodness of the networks, the MSE (see Eq. 20) and the Mean Absolute Percentage Error, defined as MAPE = (1/K) Σ_{k=1,...,K} |y'_k − y_k| / y_k, are proposed. As we can see from the results in Table 3, the RBF and the GRNN seem to underestimate the CAEO value with respect to BP. However, the three methods analyzed all have good predictive power, even if the MSE and MAPE are slightly higher for RBF and GRNN than for BP. It is evident that the BP network provides much better predictions than the other types of neural networks. As regards the latter two, however, it is difficult to establish which one has the best behavior, since the accuracy of their predictions is fairly uniform; what we can say is that the GRNN is the network that behaves in the worst way.
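A minimal sketch of the two comparison metrics just defined; the sample arrays are invented for illustration and do not reproduce the paper's Table 3 values.

```python
import numpy as np

# Illustrative sketch (assumed helper, not from the paper): the two error
# measures used to compare BP, RBF and GRNN against the LSM targets.
def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))             # Eq. (20)

def mape(y_true, y_pred):
    return float(np.mean(np.abs(y_pred - y_true) / y_true))   # MAPE as defined above

y_lsm = np.array([1.850, 10.4, 22.7, 32.250])   # hypothetical LSM targets
y_net = np.array([1.902, 10.1, 23.0, 31.880])   # hypothetical network outputs
print(f"MSE = {mse(y_lsm, y_net):.4f}, MAPE = {mape(y_lsm, y_net):.4%}")
```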
Conclusions

In this paper, we have shown how neural network methodology, joined with the LSM, can be used to evaluate R&D projects. In particular, an R&D opportunity is a sequential investment and therefore can be considered as a compound option. We have assumed the managerial flexibility to realize the production investment D before the maturity T in order to benefit from the R&D cash flows, so an R&D project can be viewed as a Compound American Exchange Option (CAEO), which allows us to couple both the sequential frame and the managerial flexibility of an R&D investment. We have analyzed two main contributions. The first is that the LSM method permits the determination of the expected continuation value by regressing the discounted cash flows on the simple powers of the variable P, and so overcomes the effort of computing the critical prices P*_{s_k}, k = 1, ..., h − 2; but this approach requires a long time to value a CAEO. The second contribution analyzed is the construction of a neural network based on the BP architecture, using the LSM simulation results as ''targets'' in the learning phase. As there is no market valuation of the CAEO, we have seen that the advantages of this approach are the speed and the accuracy of the computations and, moreover, the possibility of extending the trained neural network to value any R&D investment project. Finally, we have compared the BP results with those obtained from the RBF and GRNN neural approaches: based on the MSE and MAPE, the BP network provides much better predictions.

Appendix: Matlab algorithm

In this appendix we present, first of all, the Matlab algorithm of the LSM method:
6,050.8
2021-07-14T00:00:00.000
[ "Business", "Economics" ]
Extending human perception of electromagnetic radiation to the UV region through biologically inspired photochromic fuzzy logic (BIPFUL) systems † Photochromic fuzzy logic systems have been designed that extend human visual perception into the UV region. The systems are founded on a detailed knowledge of the activation wavelengths and quantum yields of a series of thermally reversible photochromic compounds. By appropriate matching of the photochromic behaviour, unique colour signatures are generated in response to differing UV activation frequencies. A deep scientific understanding of how the human brain perceives, thinks, and acts will have a revolutionary impact on science, medicine, economic growth, security and well-being, as recently expounded by an interdisciplinary and international team led by Albus. 1 Perception plays a relevant role within the cognitive architecture of the human nervous system: in fact, it builds and maintains an internal model of the external world and conditions behaviour. 2 Among the different sensory systems we have at our disposal, the visual system is remarkable because it allows us to discern colour, shape and the movement of objects. 3 Human colour perception is founded upon a mosaic of many replicas of three types of photosensitive cells, termed cones. Each type of cone absorbs a particular region of the visible spectrum, although their spectra partly overlap. There is a type of cone that absorbs mainly blue light (with an absorbance peak at 420 nm), another that absorbs green wavelengths (with an absorbance peak at 530 nm), and a third type that absorbs up into the red region (with an absorbance peak at 565 nm). When light having a particular spectral power distribution impinges on the retina, it activates each of the three types of cones by differing degrees. The distribution of the degrees of activation of the three types of cones is information that travels as electrochemical signals up to the visual cortex.
In the visual cortex, information is encoded as a specific pattern of activity of the cortical neurons in layer 4 of the V1 area. 4-7 Metameric matches 8 occur when different spectral signals lead to the same activation patterns in the three types of cones and to the same pattern of activity in the visual cortex; thus, the different spectral signals appear to represent the same colour. Of course, the overall information about colours within our brain is not limited to a simple correspondence between pigment activation and the spectrum of light: neuro-physiological evidence, such as colour constancy and coloured shadows, reveals the existence of post-receptor mechanisms for colour information processing. 9,10 As far as the receptor mechanisms of colour perception are concerned, these can be modelled by invoking the theory of fuzzy logic. 11,12 Fuzzy logic is a mathematically rigorous model useful to describe the human computational ability that uses words and imprecise reasoning. 13 It is based on the theory of fuzzy sets proposed by Zadeh 50 years ago. 14 A fuzzy set is more than a ''classical'' set because it can not only wholly include or wholly exclude elements, but it can also partially include and exclude elements. A fuzzy set breaks the law of the excluded middle, because an element may belong to both a fuzzy set and its complement. The degree of membership (μ) of an element in a fuzzy set can be any real number between 0 and 1. Fuzzy logic can describe any non-linear complex input-output relation after building a fuzzy logic system. A fuzzy logic system consists of a collection of input fuzzy sets, a collection of output fuzzy sets, and a fuzzy inference engine that links, through syllogistic statements of the type ''if. . ., then. . .'', each input fuzzy set to a particular output fuzzy set. The cones on the retina behave like input fuzzy sets: a beam of light impinging upon the retina belongs to the cellular fuzzy sets to different degrees, depending on its spectral power distribution. On the other hand, the output fuzzy sets are made of clusters of patterns of activity of cortical neurons that are interpreted as the same colour. The fuzzy inference engine is the mechanism of transduction of the electrochemical information stored by the photoreceptor cells into the information encoded as patterns of activity of cortical neurons. This description is useful for the design of biologically inspired chemical systems for UV vision. Humans cannot perceive UV radiation because the lens and cornea of the eye absorb strongly in this wavelength region, preventing UV radiation from reaching the retina. However, a wide variety of animal species show sensitivity to UV, ranging from insects to mammals. Most often, the species that see UV are provided with a specific photoreceptor peaked around 350 nm in the UV-A region. 15,16 UV sensitivity is useful in activities as diverse as navigation, intra- and inter-species communication, foraging and circadian synchronization. A remarkable case is the mantis shrimp, which has at least four types of photoreceptors for UV in addition to eight for the visible region, as befits its habitat of kaleidoscopically colourful tropical coral reefs. 17
In this work, we present the synthesis, study and optimized combinations of five thermally reversible photochromic compounds (1-5) that generate biologically inspired fuzzy logic systems useful to transform the frequencies of the UV spectrum, invisible to us, into specific colours perceptible to the human eye. The structures of the five photochromic compounds and the colours of their solutions containing the ring-opened forms after UV irradiation are shown in Fig. 1. The absorption spectra of the closed uncoloured forms are depicted in Fig. 2A. Naphthoxazine 1 has the largest absorption coefficient in the UV-A (320-400 nm) and in almost the entire UV-B region (280-320 nm). In the portion of UV-C between 250 and 280 nm that will be considered in this work, the absorption of 1 is overwhelmed by the contributions of the other four naphthopyrans. Naphtho[2,1-b]pyrans 3 and 4 have fairly similar absorption spectra in the UV-A; in the UV-B and UV-C regions, 3 absorbs more than 4 due to the presence of a morpholino group bound to one of the two phenyl rings (see Fig. 1). Naphtho[1,2-b]pyran 2 is characterized by small values of the absorption coefficient in the range 294-387 nm. Compound 5 is the naphtho[2,1-b]pyran that commences absorption at the shortest wavelengths among the five photochromes. Upon UV irradiation (see Fig. 2B), 1 gives rise to a merocyanine that has an absorption band with a maximum at 610 nm and its solution becomes blue; 5 generates a band having a maximum at 554 nm and its solution becomes purple; 2 produces a band in the visible region which peaks at 497 nm and its solution becomes pink; 3 gives rise to a band centred at 463 nm and its solution becomes orange; finally, 4 generates a narrow band with a maximum at 413 nm and its solution appears yellow. The band due to 1 has an absorption coefficient at 610 nm that is more than four times larger than the values of the other coloured species; it is generally accepted that photochromic oxazines typically afford ring-opened species which are more hyperchromic and bathochromic than those derived from diarylnaphthopyrans. 18-21 The photochemical quantum yields (Φ_PC) of the five photochromic compounds have been determined by irradiating at different wavelengths in the UV. The experimental methodology followed for their determination is described in the ESI. † The results are listed in Table 1. For all compounds, Φ_PC in the UV-C region is larger than Φ_PC in the UV-A and UV-B regions. This is particularly true in the case of the naphthopyrans: in fact, naphthopyrans are known to give ultrafast electrocyclic ring-opening reactions that kinetically compete with the other unreactive relaxation pathways, and their photochemical quantum yields are usually wavelength-dependent. 22 Simulations of the absorption spectra by density functional theory computation (DFT, see ESI †) reveal that the electronic transitions in UV-A and UV-B involve mainly the naphthopyran rings, whereas those in UV-C have charge transfer character from the naphthopyran rings to the two phenyl groups bound to the sp3 carbon atom in the pyran ring. The open forms produced by irradiating with UV have lifetimes of tens of seconds, which are independent of the frequency of irradiation (see data in Table 1).

Fig. 1 Structures of the five closed uncoloured (1c-5c) photochromic compounds and their ring-opened coloured forms (1o-5o). The pictures show the colours they produce in MeCN solutions upon UV irradiation.
In particular, the open form of 5 has the shortest lifetime, lasting 19 s, whereas the open form of 4 is approximately three times more persistent. With knowledge of the spectral and photochemical properties of the five photochromes, some or all of them can be mixed in different ratios to create chemical systems able to transform the frequencies of the UV-A, UV-B and UV-C regions into different colours. The matching criteria are founded upon two considerations. First, the absorption bands of the uncoloured forms must be conceived as input fuzzy sets, and the irradiation intensity I_0(λ_irr) at λ_irr will belong to each of them with a degree (μ_UV,i) given by eqn (1). 12,23 In eqn (1), I_abs,i(λ_irr) = I_0(λ_irr)(1 − 10^(−ε_UV,i C_0,i l)) is the intensity absorbed by the uncoloured form of the i-th species, ε_UV,i C_0,i l being its absorbance at λ_irr, where ε_UV,i is its absorption coefficient, C_0,i its analytical concentration and l the optical path length. Second, the bands in the visible region produced by the open forms behave as output fuzzy sets. The contribution in absorbance of each coloured species at a wavelength λ_an belonging to the visible region is given by eqn (2), where k_D,i is the reciprocal of the lifetime of the i-th open form. The final absorption spectrum recorded at the photo-stationary state will be the sum of as many terms of the form of eqn (2) as there are photochromic components present in the mixture; of course, the sum must be extended to all the wavelengths (λ_an) belonging to the visible spectrum. Many combinations of photochromes 1-5, containing from three to five compounds and selected by applying eqn (1) and (2), have been found effective in distinguishing the three principal UV regions: UV-A (400-320 nm) from UV-B (320-280 nm) and from UV-C with λ ≥ 250 nm. One of the best systems was a quaternary mixture involving 1, 4, 5, and 2 in concentrations of 5.2 × 10^−5 M, 7.38 × 10^−5 M, 1.4 × 10^−4 M, and 1.4 × 10^−4 M, respectively. Its discriminative power is shown in Fig. 3. When the system is irradiated by frequencies belonging to the UV-A, the solution becomes green; under UV-B, the solution turns grey; and under UV-C irradiation with wavelengths longer than 250 nm, it becomes orange. The spectra recorded at the photo-stationary states, shown as the grey dashed traces in the bottom panels of Fig. 3, are accurately reproduced by summing the spectral contributions of each species, expressed by eqn (2) (see the red dashed traces in the same panels). Eqn (2) provides a powerful means to predict the observed colour when the mixture is irradiated by many UV frequencies simultaneously. For example, in Fig. 4 the experimental (grey dashed traces) and predicted (red dashed traces) spectra obtained under different polychromatic irradiation sources are compared. Under direct sunlight (see the top-left panel), the solution becomes orange, and the experimental spectrum can be readily reproduced if we consider that the closed forms of 1 and 4 are completely transformed into their open forms by UV-A, whereas 2 and 5 are completely converted into their coloured forms by UV-B. When we add the contribution of the radiation at 254 nm emitted by the Hg lamp (see plot B1 in Fig. 4), the solution turns red because of the higher Φ_PC of 2 and 5 in the UV-C region. When the solution is irradiated by skylight but not direct sunlight, it assumes a pale green colour because the spectrum of skylight is deficient in UV-B (see plot C1 in Fig. 4). When we add the 254 nm wavelength emitted by the Hg lamp to the spectrum of skylight, the colour of the solution turns red because the very intense UV-C radiation quantitatively transforms the closed forms of 2 and 5 into their respective open structures; in fact, 2 and 5 have higher probabilities of absorbing the 254 nm radiation because they are at higher concentrations in the mixture. The performance of the Biologically Inspired Photochromic Fuzzy Logic (BIPFUL) systems investigated in acetonitrile solutions can be extended to a solid cellulose support such as white paper. After soaking a sheet of filter paper in an acetonitrile solution of the quaternary mixture described above, and after drying, the impregnated paper becomes photochromic and UV selective. It is possible to write on the paper using UV radiation, and it is possible to change the colour by appropriate tuning of the frequency of the irradiation source. For example, Fig. 5 shows a green A, a grey B and an orange C written on sheets of paper by irradiating them through negative masks with radiation belonging to the UV-A, UV-B or UV-C regions, respectively. If the UV source is turned off, the letters slowly (in roughly 30 minutes) disappear. This work has demonstrated for the first time that solutions of carefully matched thermoreversible photochromic compounds are chemical systems having the emergent property of discriminating the three UV regions of the electromagnetic spectrum. These results contribute to the development of Systems Chemistry 24 and Chemical Artificial Intelligence. 25,26 In fact, the BIPFUL systems described herein mimic the computing power of vertebrates and invertebrates that have different photoreceptors to distinguish between frequencies of the electromagnetic spectrum. Recently, de Silva et al. 27 demonstrated that the parallel processing obtained by combining a pH sensor and a photo-acid generator detects the edges of objects, which is a rather complex computational task normally requiring a highly organized biomolecular system.

Fig. 3 Response of the quaternary photochromic fuzzy logic system to electromagnetic radiation belonging to the UV-A, UV-B and UV-C regions, respectively. Continuous traces represent the calculated spectral contributions of 1 (black), 4 (green), 2 (magenta) and 5 (blue); their algebraic sum gives the red dashed traces. The grey dashed traces represent the spectra recorded experimentally using a 125 W Xe lamp as the irradiation source. In the three plots, ΔA is obtained by subtracting the spectrum recorded at the photo-stationary state from the initial one; (A/I_0) represents the total absorbance in the visible calculated by eqn (2), divided by the total intensity (I_0) at λ_irr.

Fig. 4 Response of the quaternary photochromic fuzzy logic system to electromagnetic radiation belonging to different regions of the UV. In each panel, the left-hand plot represents the spectrum of the irradiation source, whereas the right-hand plot shows the experimental spectrum (grey dashed trace) compared with the calculated spectrum (red dashed trace) obtained by summing the contributions of 1 (black continuous trace), 4 (green continuous trace), 2 (magenta continuous trace) and 5 (blue continuous trace); a picture of the irradiated solution is shown as an inset. The top-left panel refers to direct sunlight at 11 a.m.; the top-right panel to direct sunlight at 11 a.m. plus irradiation at 254 nm emitted by a Hg lamp; the bottom-left panel to skylight; and the bottom-right panel to skylight plus irradiation at 254 nm emitted by a Hg lamp.
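To make the fuzzy-logic bookkeeping of eqn (1) and (2) concrete, the following sketch assumes a simple photostationary balance in which the formation rate Φ_PC,i·I_abs,i of each open form equals its thermal decay k_D,i·c_i; since the exact form of eqn (2) is not reproduced in the text above, the band shapes and all numerical parameters below are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Illustrative photokinetic sketch (NOT the paper's eqn (2), whose exact form
# is not reproduced here): at the photo-stationary state the formation rate
# Phi_PC,i * I_abs,i of each open form balances its thermal decay k_D,i * c_i,
# so c_i,ss ~ Phi_PC,i * I_abs,i / k_D,i, and the visible spectrum is the sum
# of the contributions of all the open forms.
wavelengths = np.linspace(400, 700, 301)          # visible analysis range, nm

def gaussian_band(lam, lam_max, width, eps_max):
    # Toy absorption band of an open form (hypothetical shape and width).
    return eps_max * np.exp(-0.5 * ((lam - lam_max) / width) ** 2)

# Hypothetical per-species parameters loosely inspired by the text:
# band maximum (nm), quantum yield, lifetime (s), absorbed intensity (a.u.).
species = {
    "1": dict(lam_max=610, phi=0.05, tau=40.0, I_abs=0.8, eps=4.0),
    "5": dict(lam_max=554, phi=0.10, tau=19.0, I_abs=0.5, eps=1.0),
    "2": dict(lam_max=497, phi=0.08, tau=35.0, I_abs=0.4, eps=1.0),
    "4": dict(lam_max=413, phi=0.06, tau=57.0, I_abs=0.6, eps=1.0),
}

A_total = np.zeros_like(wavelengths)
for name, p in species.items():
    k_D = 1.0 / p["tau"]                           # reciprocal lifetime
    c_ss = p["phi"] * p["I_abs"] / k_D             # steady-state open form
    A_total += c_ss * gaussian_band(wavelengths, p["lam_max"], 40.0, p["eps"])

peak = wavelengths[np.argmax(A_total)]
print(f"predicted visible absorbance peaks near {peak:.0f} nm")
```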
This paper constitutes a further demonstration of how simple molecular logic systems, such as the photochromic compounds defined as ''optical transistors'', 28-31 can give rise to high-level computing performances. We have shown that our BIPFUL systems can work not only in solution but also on a solid cellulosic support such as a sheet of inexpensive filter paper. Therefore, these photochromic systems are promising for designing new devices that distinguish UV frequencies in a photochemical manner, instead of photo-electrically as with the use of semiconductors. 32 Such photochemical UV detectors can be directly linked to human vision through the colours they produce and can supplement the performance of our visual system, enabling a further degree of detection and discrimination. P. L. Gentili acknowledges the financial support of the University of Perugia (Fondo Ricerca di Base 2014, D. D. n. 170, 23/12/2014). A. L. Rightler thanks the American Chemical Society IREU Program and the National Science Foundation for support under award number DMR-1262908. The EPSRC are thanked for provision of a mass spectrometry service at the University of Swansea.

Fig. 5 Images of three sheets of white filter paper impregnated with the optimal photochromic dye mixture and irradiated by UV-A (A) or UV-B (B) radiation (emitted by a 125 W Xe lamp) and by UV-C (C) radiation (emitted by a Hg lamp) through negative masks reproducing the letters A, B and C, respectively.
4,284
2016-01-14T00:00:00.000
[ "Chemistry", "Physics" ]
Computer Vision Classification of Barley Flour Based on Spatial Pyramid Partition Ensemble

Imaging sensors are largely employed in the food processing industry for quality control. Flour from malting barley varieties is a valuable ingredient in the food industry, but its use is restricted due to quality aspects such as color variations and the presence of husk fragments. On the other hand, naked varieties present superior quality, with better visual appearance and nutritional composition for human consumption. Computer Vision Systems (CVS) can provide an automatic and precise classification of samples, but the identification of grain and flour characteristics requires more specialized methods. In this paper, we propose a CVS combined with the Spatial Pyramid Partition ensemble (SPPe) technique to distinguish between naked and malting types of twenty-two flour varieties using image features and machine learning. SPPe leverages the analysis of patterns from different spatial regions, providing a more reliable classification. Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), the J48 decision tree, and Random Forest (RF) were compared for sample classification. The machine learning algorithms embedded in the CVS were induced based on 55 image features. The results ranged from 75.00% (k-NN) to 100.00% (J48) accuracy, showing that sample assessment by CVS with SPPe was highly accurate, representing a potential technique for automatic barley flour classification.

Introduction

Barley is one of the most ancient cereal crops grown by humanity [1]. Over the years, some barley cultivars (e.g. malting or hulled barley) were selected for the malt and brewery industry, while other cultivars were selected to be used as food ingredients. These latter cultivars are known as naked, hull-less or uncovered barley, and generally contain higher amounts of soluble fiber [2,3]. Requirements concerning barley characteristics are quite different for the malting and food industries. For brewing, grains with a low β-glucan concentration and barley kernels with a tough inedible outer hull still attached are required: high β-glucan levels interfere negatively with the malting filtration process, and the loss of husks during the malting process leads to a reduction in malt quality. Such characteristics are inherent to hulled varieties [4]. On the other hand, barley cultivars with high levels of proteins and β-glucan (a functional ingredient) are preferred in the food industry, and some further specifications may vary depending on the requirements of each product; as an example, flours from naked types are preferred for infant foods because they generally have fewer husk fragments [5]. Due to its vast applicability, barley is one of the four most significant grains, being used for various organic food materials [6,7]. Although the genetic resources of a variety are the most significant factor in determining its technological characteristics, it is well established that environmental conditions and interactions between environment and genotype can modify the expression of such characteristics [8]. Consequently, it is difficult to predict the best industrial destination for barley, or other cereal grains, without performing physical and chemical analyses, which are generally expensive, time-consuming, and/or require specialized analysts and equipment [9]. The agricultural and food industries are therefore searching for fast and accurate technologies to increase processing performance and improve product quality.
Imaging sensors and computer vision systems have been developed for grading product quality, discriminating among varieties, and detecting contaminants or added substances [10-12]. Quality evaluation can be performed by a Computer Vision System (CVS) based on an acquisition device (a digital camera, inexpensive and broadly available) and prediction models using machine learning algorithms. This type of approach presents several advantages, including rapidity, low cost, and accuracy, and can be applied to grains/seeds [13,14], flours [11], or other agricultural by-products. Being non-invasive and not employing chemical reagents, these methods can be considered eco-friendly technologies. Product inspection is in high demand in the food industry, including quality inspection, process control, classification, and grading. Manual inspection by visual examination demands a long time and is tedious and inefficient; machine vision is suitable for this task, as computer vision provides an economical and fast alternative for food processing inspection [15]. The visual aspect is one of the most important parameters for the assessment of food quality. The general utilization of processing equipment in the industry has increased the risk of foreign material contamination [16]. Adulteration, contamination, or simply the grading of products according to their visual characteristics are common needs in food processing; for instance, due to the resulting potential health threat to consumers, the development of fast, label-free, and non-invasive techniques for the detection of adulteration over a wide range of food products is necessary [17]. Hence, the food industry is interested in optimizing not only the nutritional characteristics of food products, but also their appearance, including color, texture, etc.; it is essential to investigate objective methods that can quantify the visual aspects of food products [18]. To meet the demand for high-quality produce, grains are classified according to their characteristics before being sent for processing. Manual inspection of in-process products is difficult considering the sampling from processing lines [15]. Considering that barley grains are inhomogeneous, imaging techniques will have extensive practical applicability as analytical tools during industrial processing. Regarding the chemical-free techniques available, there are still some common challenges to be faced before transferring recent research achievements from the laboratory scale to industrial applications, such as building innovative data analysis algorithms that can thoroughly filter redundant information, exploiting appropriate statistical techniques to improve model robustness for real-time operations, and decreasing the cost of the instruments [19]. In digital image analysis, spatial pyramid methods are very popular for preserving the spatial information of local features, focusing on improving the pattern description [20]. Sharma et al. [21] proposed the Spatial Pyramid Partition (SPP) and highlighted that in many visual classification tasks the spatial distribution of features carries important information. However, the SPP method, based on a bag of features, leads to an enlarged feature vector when several image descriptors are used, resulting in a high-dimensional problem and demanding feature extraction or feature selection methods. Szczypinski et al.
[9] classified barley grain varieties based on image-derived shape, color, and texture attributes of individual kernels. Considering barley flour classification, spatial information is required, but the original SPP is not feasible in our problem due to the characteristics of the bag-of-features representation. In other words, barley grain and flour image-based classification demands several features, while flour image analysis requires more robust pattern recognition approaches than SPP can provide. Moreover, the increase of dimensionality arising from the application of SPP is a challenge in the machine learning scenario, and consequently for CVS applications. Therefore, we propose the Spatial Pyramid Partition ensemble (SPPe), an ensemble technique based on SPP that supports a suitable image pattern description in scenarios with a considerable number of features, as shown in Figure 1. A large number of characteristics can improve the performance of prediction tasks. The traditional feature extraction method considers the whole image at once when extracting its features, which may discard important spatial information captured by some image descriptors. As previously mentioned, SPP was proposed to improve tasks that require localized descriptors; it is based on a bag of features grounded on splitting images into sub-regions to provide additional spatial information. Thus, SPP produces a visual descriptor vector composed of the original image and its sub-regions for each sample. The proposed SPPe was evaluated in a CVS with a set of image features based on color, intensity, and texture, in comparison to SPP [21] and to a traditional CVS that directly uses the features extracted from the Region Of Interest (ROI) [13,22-24]. We compared the performance of four different machine learning algorithms for modeling the classifier: Random Forest (RF), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), and the J48 decision tree. These algorithms were employed to distinguish between naked and malting barley flour with image features extracted from 22 varieties, with five samples acquired from each variety.

Related Work

Several studies have presented CVS with machine learning methods applied to improve the prediction of a given parameter. Some CVS require sophisticated modeling to cope with non-linearities and noisy and imbalanced datasets. The application of Machine Learning (ML) techniques for the prediction of food attributes and quality evaluation has been widely investigated [10,22,25-30]. ML can be applied to extract non-trivial relationships automatically from a training dataset, producing a generalization of knowledge for further predictions [31]. Hence, machine learning promotes high performance as an alternative for intensive agricultural operational processes in the agri-technologies domain [32]. Random Forest (RF) [33], Support Vector Machine (SVM) [34], k-Nearest Neighbors (k-NN) [35], and the J48 decision tree algorithm [36] are well-established machine learning algorithms applied in many studies related to food quality analyses. RF was compared to SVM for an automated marbling grading system of dry-cured ham [37]; the SVM algorithm showed better performance, with 89% of the samples correctly classified. Another application of SVM was described in Papadopulou et al. [27], achieving over 89% accuracy for the classification of beef fillets according to quality grades.
For analyzing image features to evaluate the impact of diets on live fish skin, Saberioon et al. [38] applied four different classification methods, and SVM provided the best classifier with 82% accuracy. Barbon et al. [23] proposed a CVS for meat classification based on image features, managed by an instance-based system using k-NN to classify meat according to marbling scores; the authors reported an accuracy of 81.59% for bovine and 76.14% for swine samples, using only three samples for each marbling score in the k-NN prediction models. Granitto et al. [30] applied RF for the discrimination of six different Italian cheeses; in addition to reasonable accuracy, the RF model provided an estimation of the relative importance of each sensory attribute involved. The effectiveness of RF was also highlighted in a CVS used for predicting the ripening of papaya from digital imaging [22]. Considering barley applications, Nowakowski et al. [39] evaluated malt barley seeds using four barley varieties; the feasibility of image analysis was demonstrated with machine learning and morphology and color features, achieving 99% accuracy. Kociolek et al. [40] classified barley grain defects using preprocessed kernel image pairs for feature extraction based on morphological operations. Pazoki et al. [41] identified cultivars using rain-fed barley seeds: the proposed method used 22 features extracted from three varieties of samples, which fed a Multilayer Perceptron (MLP). The features of color, morphology, and shape were used for individual rain-fed barley seeds, and different network architectures were explored, including feature selection, resulting in 82.22% accuracy. Ciesielski and Nguyen [42] proposed to distinguish three different classes of bulk malt (made from barley grains); image texture features were extracted and classification was performed with k-nearest neighbors (k-NN), achieving an accuracy of 77.00%. According to the authors, classification through the evaluation of individual kernels is time-consuming, and many kernels are required to obtain a significant estimation of the modification index of a whole batch; nevertheless, separating the samples into small milled portions is a promising alternative, aiding the evaluation of the differences between barley types. Lim et al. [7] explored Near Infrared Spectroscopy (NIRS) and a PLS-DA discrimination model to predict hulled barley, naked barley, and wheat contaminated with Fusarium; the authors achieved high accuracy at the cost of the complexity of the NIRS equipment and signal processing. Accordingly, the above studies have performed image analysis at different stages for variety identification, for industrialization and improvement purposes. Integration into the industrial environment plays a major role in developing automated systems for distinguishing agricultural raw materials. The approach introduced in this paper is a CVS with an adaptation of the original SPP, modifying the perspective of the sub-images that compose an original sample: the proposed approach is based on splitting each image into several sub-regions to predict the respective sample. We propose a method to improve prediction performance using CVS with machine learning, by applying the SPPe technique. Fourteen of the cultivars were identified as malting barley, and eight were naked barley; letters are followed by numbers in order to indicate differences among the specific barley varieties (Table 1).
Computer Vision System

The CVS was constructed to classify samples as malting or naked barley through the analysis of barley flour images. The employed CVS can be detailed in four main steps: acquisition, preprocessing, feature extraction, and classification (Figure 2). It is important to highlight that the proposed SPPe is a technique to improve the classification performance, grounded on a more informative strategy applied to the image sample before image feature extraction. SPPe requires the iterative production of sub-images from an original sample image; these new sub-images have their features extracted to enrich the dataset with complementary sources of information. Prediction of the original sample is based on a voting process over the sub-image classifications, as detailed in Section 3.2.

Image Acquisition and Preprocessing

Image thresholding, one of the most widely used methods for image segmentation, was applied to separate the barley flour from the background. Since this thresholding may lead to the removal of some pixels of the ROI, all the holes in the barley flour area were filled using a connectivity approach. At this point, the obtained image mask (representing the foreground) was used to find the center of mass of the object (the barley flour sample). As the final step, the center of the mask was used to grow a predefined square until reaching the object edge; the square mask was applied onto the original image, cropping the ROI.

Spatial Pyramid Partition Ensemble

In the current work, we propose SPPe as part of the preprocessing step (Figure 2), to obtain a complete pattern comprehension of each sample. Our technique is a modification of the traditional SPP proposed in Sharma et al. [21]. Spatial Pyramid Partition (SPP) is based on splitting each image into a sequence of smaller sub-regions, extracting local image features from each sub-region, and encoding the features into a vector [43,44]. In this sense, a given image is viewed through its low-level visual features extracted from all sub-regions. Each image is split into three levels: Level 0 is the ROI image with edges removed; Level 1 subdivides the ROI into four distinct parts, extracting its features; Level 2 subdivides each of the previous partitions into four further partitions, totaling 21 images from each ROI for feature extraction (a minimal sketch of this splitting is given below). As a result, high-level and low-level features are extracted from the SPP image sequence to compose the image feature vector [21]. The proposed SPPe adapts the original SPP using an ensemble strategy to obtain the image classification. As opposed to traditional SPP, the aggregated image feature vector is not composed of all sub-regions concatenated as a bag of features. Figure 4 presents an overview of the SPPe approach to creating sub-regions. Following the description of image splitting, a new dataset was formed, composed of the sub-regions designated as Levels 1 and 2; thus, a feature vector was built from each sub-region without concatenating all the regions. The ensemble strategy was applied to modify the dataset samples made up of smaller regions (Figure 4), and the sub-regions were used for problem modeling. After the prediction of a given sample from each sub-region, the scheme applies a weighted vote. In other words, we employed SPP with a subdivision strategy to classify the Level 0 samples, considering each image separately for classification; following the sub-region predictions, we aggregated them with the respective sample analyzed as Level 0.
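A minimal sketch of the pyramid splitting described above (the function and variable names are our own, not the authors' code): Level 0 is the ROI, Level 1 its four quadrants, and Level 2 the four quadrants of each Level 1 region, giving 21 images per sample.

```python
import numpy as np

# Illustrative sketch (our own naming) of the spatial pyramid partition:
# Level 0 is the ROI, Level 1 its 4 quadrants, Level 2 the 16 sub-quadrants,
# i.e. 21 images per sample in total.
def quadrants(img):
    h, w = img.shape[:2]
    return [img[:h // 2, :w // 2], img[:h // 2, w // 2:],
            img[h // 2:, :w // 2], img[h // 2:, w // 2:]]

def spatial_pyramid(roi):
    level0 = [roi]
    level1 = quadrants(roi)
    level2 = [q for region in level1 for q in quadrants(region)]
    return level0, level1, level2

roi = np.zeros((256, 256), dtype=np.uint8)        # placeholder ROI image
l0, l1, l2 = spatial_pyramid(roi)
print(len(l0) + len(l1) + len(l2))                # -> 21
```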
In this way, from a new sample image, each sub-region obtained from SPPe is classified, and the final decision is achieved by a voting step; a sketch of this weighted vote is given below. A single model was induced for predicting all sub-regions from the different levels. The induction of the classification model was carried out in the Leave-One-Subject-Out (LOSO) scheme to avoid bias [45]. The method employed the LOSO procedure to bind the sub-regions to their Level 0 image, keeping all of them together in the training or the test phase; in other words, each sub-image was bound to the respective sample (Level 0) and received its label. Hence, the sub-regions were considered non-independent regions, being parts of the same sample. This methodology guarantees that the model learns nothing about the subject to be predicted; thereby, the technique decreases the learning bias, supporting reliable results. The SPPe output is based on the relation between the numbers of correctly and incorrectly classified sub-regions, combined into a majority decision as the ensemble prediction. Each level of partition in the SPPe method was assigned a voting weight: in the proposed experiment, each Level 1 ensemble member (image prediction) was assigned a weight of 1/3 and each Level 2 member a weight of 1/12. At the end of the iterations, the final result was computed considering each vote multiplied by its assigned weight, and the final classification was obtained as the weighted majority vote over the 20 sub-regions (4 from Level 1 and 16 from Level 2). This procedure creates a more reliable image classification by reducing overfitting, providing a robust description of the barley based on several regions and scales. We also ran the original SPP proposed by Sharma et al. [21] in order to measure the performance improvements of SPPe. SPPe avoids the high-dimensionality drawback: in our scenario, SPP demands a total of 1155 image features per sample (55 features × 21 images), while SPPe maintains only 55, both using only one classification model. Another important factor is related to the presence of visual components (e.g., husks) that could lead to noisy or biased features in the image description vector; by using an ensemble technique such as SPPe, we can reduce the overfitting of the final model [33], since the visually undesired components are outvoted in the final decision as a minority vote.
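The weighted vote can be sketched as follows, using the stated weights of 1/3 per Level 1 member and 1/12 per Level 2 member; the helper function and the example labels are our own illustration.

```python
from collections import defaultdict

# Illustrative sketch (our own helper, not the authors' code) of the SPPe
# weighted majority vote: 4 Level 1 predictions with weight 1/3 each and
# 16 Level 2 predictions with weight 1/12 each decide the Level 0 label.
def sppe_vote(level1_preds, level2_preds):
    scores = defaultdict(float)
    for label in level1_preds:          # 4 members, weight 1/3
        scores[label] += 1.0 / 3.0
    for label in level2_preds:          # 16 members, weight 1/12
        scores[label] += 1.0 / 12.0
    return max(scores, key=scores.get)

l1 = ["naked", "naked", "malting", "naked"]
l2 = ["malting"] * 6 + ["naked"] * 10
print(sppe_vote(l1, l2))                # -> "naked"
```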
Each image obtained from the SPPe method had its features extracted by the same descriptors, independently of the level, for further analysis (as described in Section 3.2.1).

Image Analysis and Feature Extraction

Step 2 concerns the image feature extraction, in sequence with the previous procedures (Figure 2). The extracted features are groups of discriminatory properties suitable to distinguish between the naked and malting classes. We extracted a set of 55 image features based on color, intensity, and texture; the list of all image features used in our solution is presented in Table 2. Concerning color descriptors, statistical moments from the CIE L*a*b* and HSV color spaces were used, similarly to Li et al. [43] and Campos et al. [46]. The acquired images were stored in the RGB format, where each pixel is based on three color channels: R (red), G (green), and B (blue). Because brightness information is present in all the RGB color channels, a good practice is to select a different color space able to isolate brightness; for this reason, the input images were transformed from RGB to CIE L*a*b* and HSV before extracting the color features. The CIE L*a*b* and HSV color spaces were explored through the L* (lightness), a* (red-green), b* (yellow-blue), Hue (H), Saturation (S), and Value (V) color channels, respectively. The mean and standard deviation were calculated for each color channel; moreover, we computed the standard deviation, kurtosis, and skewness of the histogram of each channel, comprising a total of 30 color features. Likewise, the same five statistical moments were used to describe the intensity information of each image, with the pixel intensity calculated as the average of the RGB values. Image entropy, which can be characterized as a statistical measure of the randomness, texture, and contrast of grey-scale images, was calculated for the intensity channel [47]. Both color and intensity variations between samples can be observed in Figure 3; therefore, those features were used to properly describe the samples, allowing the machine learning algorithms to find the correct relations between features and barley types. Texture is an important feature to identify objects or the presence of patterns in an image [48]. In this case, texture features were used to distinguish between different types of barley: for example, the presence of husk fragments in milled barley affects some features and could characterize a specific type of barley flour. Thus, three texture descriptors of general applicability were used: local binary patterns [49]; the Grey Level Co-occurrence Matrix (GLCM) [48], with distance d = 1 and angle 0° considering 256 grey levels; and the Fast Fourier Transform (FFT), the last to uncover frequency-domain characteristics [50,51]. It is important to mention that we selected traditional image descriptors to compose our feature vector, enabling the comparison among the approaches for barley flour classification. Nevertheless, different image classification tasks can take more advantage of SPPe by employing alternative image features, e.g., features grounded on the discrete wavelet transform [52] or the fractal dimension [53].

Machine Learning

Features extracted from images are often used in classification and regression models, in order to identify samples from different classes or to predict quality parameters. In this way, machine learning algorithms can induce models from image features for the automatic classification of barley flour. The modeling complexity of a machine learning system can vary greatly, allowing a high degree of customization with appropriate trade-offs inherent in each specific scenario [54]. Some of the approaches include linear methods and non-linear machine learning algorithms, such as k-nearest neighbors, support vector machines, the J48 decision tree, and random forests [46]. A brief description of the algorithms and the corresponding R packages used to implement each ML algorithm is given in Table 3. In our experiments, the hyperparameters used were the default values of the R packages, in order to support a fair comparison among the algorithms.

Table 3. Machine learning algorithms used in the experiments and corresponding R packages.
k-Nearest Neighbor (k-NN): a non-parametric lazy learning algorithm; the training data are not used for any generalization [55]. Package: RWeka. Settings: Euclidean distance; k = 5.
Decision Tree (J48): a decision tree widely applied to represent series of rules that lead to a class or value [56,57]. Package: RWeka. Settings: C = 0.25; threshold = 0.25; with pruning.
Random Forest (RF): a combination of decision tree models that provides more accurate predictions [33,58]. Package: RandomForest. Settings: ntree = 100; mtry = 7.
Support Vector Machine (SVM): a statistical learning algorithm, used for supervised ML and food quality solutions [34,59]. Package: e1071. Settings: kernel = polynomial; γ = 0.02; degree = 3.
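Returning to the statistical moments described in the feature extraction subsection above, the following sketch computes the five per-channel values (channel mean and standard deviation, plus the standard deviation, kurtosis and skewness of the channel histogram); the helper is our own reading of the text, not the authors' code.

```python
import numpy as np
from scipy import stats

# Illustrative sketch (our own helper) of the statistical moments used as
# color/intensity features: mean and standard deviation of a channel, plus
# standard deviation, kurtosis and skewness of its histogram, i.e. five
# values per channel (6 channels x 5 moments = 30 color features).
def channel_moments(channel):
    values = channel.ravel().astype(float)
    hist, _ = np.histogram(values, bins=256)
    return [values.mean(), values.std(),
            hist.std(), stats.kurtosis(hist), stats.skew(hist)]

rng = np.random.default_rng(3)
gray = rng.integers(0, 256, size=(128, 128))      # placeholder intensity image
print(channel_moments(gray))                      # 5 features for one channel
```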
In our experiment, the algorithms were applied in the R environment to induce models for barley flour classification. In order to achieve a reliable evaluation, two datasets were created: a cross-validation set and a prediction (test) set. The cross-validation set was used to induce the models, adjusting the hyperparameters with 10-fold cross-validation over 1800 images (Levels 1 and 2), while the prediction set was employed to test the classification performance using 400 images (Levels 1 and 2). The separation of samples into training and test sets was made in order to minimize the risk of overfitting, using the Kennard-Stone algorithm [60]. It is important to mention that the samples were split into the training and testing sets considering Level 0 (a group of sub-regions): 90 samples (81.8%) for training and 20 samples (18.2%) for testing.

Evaluation Metrics

Performance evaluation of the machine learning models was done using the total accuracy [61], computed from the confusion matrix and defined by Equation (1): accuracy = (TP + TN)/n, i.e. the sum of the main diagonal values of the confusion matrix, the True Positives (TP) and True Negatives (TN), divided by the sum (n) of the values of the whole matrix. Thus, it is possible to compute the performance of the image features and machine learning algorithms through the proportion of correctly classified barley flour samples. Recall (Equation (2): recall = TP/(TP + FN)) and precision (Equation (3): precision = TP/(TP + FP)) are often used to evaluate the effectiveness of classification methods based on the False Negatives (FN) and False Positives (FP); in our work, we employed these metrics in order to support a fair comparison of the methods' quality. Additionally, the processing time from feature extraction to prediction was compared, making it possible to estimate the overall job execution from an additional perspective of performance analysis; in the experiments, the time cost was calculated as the average of 30 runs. Concerning the descriptors, random forest importance was applied in this approach. The RF algorithm estimates the importance of a variable by observing how much the prediction error increases when the data for that variable are permuted while all the others are left unchanged. Based on the trees built as the random forest is constructed, RF importance assesses each extracted image feature, measuring its impact on the predictions [33]. For evaluating the features extracted from the barley flour samples, this variable importance metric demonstrates the advantage of random forest permutation, because it embraces the impact of each predictor variable individually, as well as in multivariate interactions with other predictor variables.
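A minimal sketch of Equations (1)-(3) applied to a 2×2 confusion matrix; the counts are invented for illustration.

```python
import numpy as np

# Illustrative sketch (our own helper) of Equations (1)-(3): total accuracy,
# recall and precision computed from a 2x2 confusion matrix
# [[TP, FN], [FP, TN]] for the naked-vs-malting task.
def metrics(confusion):
    (tp, fn), (fp, tn) = confusion
    n = tp + fn + fp + tn
    accuracy = (tp + tn) / n          # Eq. (1)
    recall = tp / (tp + fn)           # Eq. (2)
    precision = tp / (tp + fp)        # Eq. (3)
    return accuracy, recall, precision

cm = np.array([[9, 1], [1, 9]])       # hypothetical prediction-set counts
acc, rec, prec = metrics(cm)
print(f"accuracy={acc:.2%} recall={rec:.2%} precision={prec:.2%}")
```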
Algorithms and Image Processing Methods

The results of algorithm performance for the classification of naked and malting barley flour revealed the advantages of the proposed SPPe method in comparison to SPP and the traditional approach. The experiments showed distinct performance values achieved by the techniques applied with the different machine learning algorithms. In order to establish a practical performance testing environment, the experiments were executed on an Intel Core i7-6700 CPU at 3.40 GHz with 16 GB of memory. Table 4 summarizes the results obtained by the prediction algorithms over the datasets, considering performance measures such as accuracy, precision, recall, and average processing time (Table 4: performance measures in the comparison of the methods and algorithms (RF, k-NN, J48 and SVM) over the cross-validation and prediction datasets). Comparing the machine learning algorithms, k-NN provided the worst performance, with accuracy values equal to or below 80.00% for prediction using all the methods investigated. Concerning only the results of the traditional CVS approach (without SPP or SPPe) on the prediction set, RF obtained superior performance, with 90.00% accuracy and precision/recall values of 86.67%, while SVM and k-NN presented similar accuracy (80.00%). The original SPP presented superior results compared to the traditional method: SVM (92.00%) and RF (91.00%) reached higher accuracy than J48 (88.00%) and k-NN (70.56%) on the cross-validation set. For the prediction set, RF obtained superior results, similar to SVM (95.00%); the worst metrics for the prediction set using the SPP technique were obtained by k-NN (60.00%), followed by J48 (85.00%). An improvement in classification accuracy was obtained by the SPPe technique with the ML algorithms (Table 4): the average classification performance considering all machine learning algorithms was improved from 83.75% accuracy with the traditional method and the original SPP to 91.25% on the prediction sets. It is important to highlight that J48 stood out with 100% accuracy, while k-NN maintained the lowest performance, with 95.56% (cross-validation set) and 75.00% (prediction set). Likewise, the SPPe solution had a lower processing time cost than SPP; on the other hand, the traditional CVS provided better results than SPPe when using k-NN. Considering the processing time of the applied methods, the traditional CVS spent less time, being faster than SPP and SPPe, as expected; when comparing SPP and SPPe, our proposal was faster than SPP in all experiments. It is clear that the time cost tends to grow when the feature vector expands, as in SPP and SPPe; however, the trade-off between predictive performance and processing time suggests SPPe as a suitable solution when the main goal is classification performance.

Evaluation of Image Features

The RF importance exposes the most relevant features in the prediction tasks; the importance values are summarized in Figure 5. The most important features were from color: the standard deviation values of the H and b* channel histograms (hue from HSV and yellow-blue from CIE L*a*b*) were the most relevant explanatory features, with scores higher than 50. Several statistical values from the H, b*, and a* channels outranked the texture and intensity features, although all features had an impact on the classification procedure. To characterize the types of barley flour, the mean and standard deviation of the grey-scale image and of the HSV hue channel were the most discriminative image features; moreover, the values of the a* and b* channels gained higher importance, as well as the saturation. Texture features were also significant for predicting the samples: indeed, some texture features from the grey level co-occurrence matrix and some LBP metrics were efficient at predicting variations between samples, and could also be related to the granularity present in the barley flour.
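The permutation-importance idea described above can be sketched with scikit-learn as follows; note that this is a Python stand-in for the authors' R pipeline (which used the RandomForest package), and the toy data are our own.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Illustrative sketch (not the authors' R pipeline): permutation-based
# feature importance measures how much the prediction error grows when one
# feature's values are shuffled while all the others stay unchanged.
rng = np.random.default_rng(2)
X = rng.normal(size=(110, 55))          # 110 samples, 55 image features
y = rng.integers(0, 2, size=110)        # 0 = malting, 1 = naked (toy labels)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("top-5 feature indices:", top)
```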
Figure 6 summarizes the results, making it possible to observe the misclassified samples when comparing all the techniques over five different acquisition repetitions (A0, A1, A2, A3, and A4). Correctly classified samples are presented in light blue, while dark blue shows the misclassified samples for each method. It is possible to observe that the naked class presented more misclassified samples, meaning it is more complex to predict. For some classification algorithms, it is possible to observe similar behaviors among the samples, with k-NN showing the worst performance. Analyzing the misclassified samples, it was possible to identify similar patterns in both the naked and malting types. Observing the accuracy error, it is possible to conclude that naked sample N07 (37% error) presented characteristics similar to those of malting samples. Figure 7 presents an overview of the N07 cultivar, where it is possible to observe details by comparing the five samples mentioned previously and highlighted in the heat map shown in Figure 6. One possible explanation for the high error rate observed is the fact that Brazilian naked varieties were developed using malting barley genes, so the studied features can be similar to those of malting barley varieties due to their genetic origins [62]. Overall, SPPe demonstrated superior prediction ability compared to the other methods, in addition to reducing overfitting and avoiding the high dimensionality present in the original SPP. Differences in composition and physical characteristics between the two barley groups (naked and malting barley) were detected by the computer vision system, and the classification accuracy was improved using SPPe.

SPPe in the Industry

There is a notable advance when using the SPPe technique in comparison to the traditional CVS and SPP: the best result on the prediction set was the J48 predictive performance, obtained with a low processing time in comparison to SPP. The proposed vision system was designed as an embedded process to provide high-level information for the barley flour industry environment. The system can be implemented in three steps:
• The input image (acquisition) is extracted from the camera; images are acquired by a camera placed at the scene under inspection.
• The scene has to be appropriately illuminated and arranged, which promotes suitable capture of the image properties that are necessary for image processing (feature extraction and classification).
• The processing stage consists of a computer employed for processing the acquired images, resulting in the classification as naked or malting barley flour.
Combining embedded technology with image processing, a future application in barley flour type recognition for industrial quality control is possible. Our proposal is a viable solution for industrial barley flour processing, as well as for similar flour-based food products. More specifically, our proposal contributes to the industry at different stages of production: the CVS can be used for quality control, monitoring specific suppliers and providing financial advantages for high-quality flour, and the proposed solution can be integrated into processing lines to identify barley according to its application, i.e., whether it is destined for infant formula, health food, or the malting industry, among other industrial uses.
It is important to highlight that the SPPe was designed with a smaller feature vector than the SPP technique, spending less time to process; this speed advantage favours its implementation in the production line. Conclusions This work proposed a system based on ML algorithms and computer vision, developed to automate the data analysis. The newly proposed Spatial Pyramid Partition ensemble (SPPe) approach provided better results for the classification of barley flour into two classes when compared to the Spatial Pyramid Partition (SPP) and the traditional CVS. Differences in barley composition cause variation in the flour's physical characteristics, which were detected by image analysis. The proposed method showed a significant improvement, reducing overfitting, avoiding dimensional growth, and improving classification accuracy for several machine learning algorithms. All image descriptors (color, intensity, and texture) were found to provide helpful information for distinguishing between malting and naked barley flour samples. The best model was built using SPPe with the J48 decision tree, allowing the correct classification of 100% of the samples. The results of this study are promising, and they could allow the development of an effective model to expand its use in the food industry, reducing costs and improving the effectiveness of automatic quality inspection.
7,588
2019-07-01T00:00:00.000
[ "Agricultural And Food Sciences", "Computer Science" ]
Photoluminescence Properties and Fabrication of Red-Emitting LEDs based on Ca9Eu(VO4)7 Phosphor The PL excitation spectrum of Ca9Eu(VO4)7 exhibits a strong absorption band between 250 and 310 nm in the UV region, attributable to charge transfer of the V-O type inside the VO43− groups, and some sharp lines between 350 and 500 nm due to various intra-configurational 4f-4f transitions of Eu3+.18 Under near-UV excitation, higher-lying 4f6 levels transfer energy to the 5D0 level by nonradiative processes, leading to intense red emission at 614 nm due to the forced electric dipole allowed 5D0 → 7F2 transitions of the Eu3+ ions.22 The red emission has been shown to exhibit a relatively high thermal stability with respect to other inorganic phosphors. At T = 420 K, approximately 20-25% of the luminescence intensity as measured at room temperature is lost for excitation in the near-UV to blue range, with no shift in the emission wavelengths,18,22,23 suggesting it would be suitable for application in pc-WLEDs. However, the performance of Ca9Eu(VO4)7 phosphor integrated on an LED base has not yet been evaluated, and thus key technological aspects, such as the luminous efficacy of optical radiation (LER) and the color/intensity stability under typical pc-WLED operating conditions, remain unclear. For this reason, here we investigate the PL properties, such as the excitation and emission spectra and thermal stability, of Ca9Eu(VO4)7 phosphor under excitation in the near-UV range, and we evaluate the performance of a pc-LED prototype comprising a near-UV LED and Ca9Eu(VO4)7 phosphor. Experimental Materials synthesis.- The sample, Ca9Eu(VO4)7, was prepared by a conventional solid state synthesis route by mixing stoichiometric amounts of the starting reactants CaCO3, Eu2O3, and V2O5. The raw materials were pelletized under a load of 10 tons and underwent three thermal treatments in air at 350°C (5 hours), 700°C (5 hours), and 1000°C (10 hours), with intermediate grindings. The as-sintered sample was pulverized and used as is for further studies. Powder X-ray diffraction.- Powder X-ray diffraction (PXRD) measurements were performed on an X'TRA powder diffractometer with a Cu-anode X-ray source (λ = 1.5406 Å), operating in the Bragg-Brentano geometry. The PXRD pattern was collected in the 20-70° 2θ range at a scan speed of 0.03°/s. Photoluminescence spectroscopy.- The PL excitation and emission spectra were measured at room temperature using a Fluorolog 3 spectrofluorometer from Horiba-Jobin-Yvon, equipped with a Xe lamp, a double excitation monochromator, a single emission monochromator (model HR320), and a photomultiplier tube (PMT) in photon counting mode for the detection of the emitted signal. Diffuse reflectance spectra were measured with the use of a halogen lamp.
Variable-temperature emission intensity measurements were performed over the temperature range T = 300-800 K using a Linkam THMS 600 heating stage, with the sample prepared as a disc-shaped pellet of 5 mm in diameter by uniaxial pressing of 60 mg of Ca9Eu(VO4)7 phosphor. The excitation was provided by a 454 nm DeltaDiode DD-450L laser connected to a DD-C1 picosecond diode controller from Horiba Scientific. Emission spectra were recorded on an Ocean Optics USB2000+ UV-Vis spectrometer. Phosphor-converted red LED prototypes.- Red-emitting pc-LED prototypes were developed by integrating Ca9Eu(VO4)7 phosphor powder on near-UV LED chips purchased from Semileds Corporation. The LED chip dimensions were 400×400 μm2, with a 115 μm bond pad on the surface and an Au-plated back side. The active region consisted of InGaN epi-layers with an emission maximum at 396 nm. The LEDs were capable of delivering 20 mA at 3.8 V forward voltage with a junction temperature of 125°C. Au-plated Schott 8-pin TO-5 headers were used as LED holders. Mounting was done using In-204 soldering paste from Indium Corporation, heating the headers on a hot plate to 200°C before mechanical transfer of the LED chip to the header. Another heating to 200°C was performed with the LED chip mounted. Contacts to two of the pins were made from the LED top bond pad as well as from the back side of the Au-plated chip, using a K&S 4123 wedge bonder and 17 μm Au wire. The Ca9Eu(VO4)7 phosphor was dispersed in Elastosil RT 601 silicone gel procured from Wacker Chemicals. The Elastosil RT 601A and 601B components were mixed in a 10:1 ratio. After component mixing, 75 μl of the gel was mixed with different ratios of phosphor powder. A series of five samples with phosphor concentrations of 133, 270, 400, 530, and 670 g/l silicone gel was prepared. These gels were then mechanically transferred to the headers to coat the LED and left to solidify overnight. A reference LED, with 75 μl of pure silicone gel (without phosphor), was prepared in order to measure the absorption of the silicone gel alone. For the reproducibility of results, the full 75 μl of gel + phosphor was applied to each LED. The 8-pin headers with the coated LEDs were mounted in an Optronic Laboratories OL 770 Multichannel spectroradiometer integrating sphere for the optical measurements. Current was supplied using a Yokogawa 7651 Programmable DC source, and the voltage was measured simultaneously using a Keithley 2400 SourceMeter (to monitor the maximum allowed voltage of the LEDs). Results and Discussion Powder X-ray diffraction.- Fig. 1a shows the room temperature PXRD pattern of the Ca9Eu(VO4)7 phosphor sample along with the Rietveld refinement of the pattern. The PXRD pattern shows that the sample is single-phase with no detectable amounts of impurities, the refinement having a chi-square value of 2.52. In agreement with previous studies of Ca9Eu(VO4)7, the PXRD pattern can be indexed to a whitlockite-type structure (space group R3c) with a hexagonal unit cell built up of VO4 tetrahedra, with the Ca/Eu ions occupying the space between the VO4 tetrahedra and the Eu3+ ions randomly distributed over four individual Ca crystallographic sites (Fig. 1b). The unit cell parameters are a = b = 10.8663 Å and c = 38.0863 Å, and the unit cell volume is V = 3894.62 Å3. These values are in agreement with earlier crystallographic studies of Ca9Eu(VO4)7.13,18
Photoluminescence spectroscopy.- Fig. 2a shows the PL emission and excitation spectra for Ca9Eu(VO4)7, as measured at T = 293 K. The excitation spectrum (in black), measured at a fixed emission wavelength of 613 nm, is characterized by a broad band that extends from the lower limit of the measured spectrum (275 nm) up to about 350 nm, which is assigned to the 1A1 → 1T2 and 1A1 → 1T1 charge transfer transitions inside the VO43− groups. Additionally, the spectrum contains sharper peaks at approximately 380 nm (7F0 → 5L7), 395 nm (7F0 → 5L6), 414 nm (7F0 → 5D3), and 464 nm (7F0 → 5D2), respectively.23 The excitation spectrum is in good agreement with the diffuse reflectance spectrum, as reported in Fig. 2b. The intense transitions at 395 and 464 nm are important for the application of Ca9Eu(VO4)7 as a color converter in LEDs, as they overlap with the emission from near-UV and blue LEDs, respectively. The PL emission spectrum (in red in Fig. 2a) was measured for excitation at 395 nm in order to mimic the experimental conditions for the pc-LED prototype based on a near-UV LED and Ca9Eu(VO4)7 phosphor, as described below. The spectrum contains two particularly strong emission lines, at 613 nm (5D0 → 7F2) and 700 nm (5D0 → 7F4), respectively, in agreement with the literature.23 Fig. 2c shows the PL emission spectra (550−750 nm) as a function of temperature, from room temperature up to T = 793 K, for excitation at 454 nm. The intensity of the emitted light decreases systematically with increasing temperature. Nevertheless, virtually no shift in the wavelength of the emission bands is observed, the latter being an important characteristic for technological applications. The PL emission spectra also showed some background in the 550-575 nm region, which is not observed for excitation at 395 nm, cf. Fig. 2a. The origin of this emission is unclear but may be related to the sensitization of the Eu3+ emission under 454 nm excitation, as this wavelength region also corresponds to the emission from the lowest 3T2 → 1A1 and 3T1 → 1A1 transitions of VO43− at around 462 and 468 nm, with a total separation of 278 cm−1.23 Fig. 2d shows the temperature dependence of the emission intensity, integrated over the wavelength region 605-625 nm, i.e., over the predominant emission line (5D0 → 7F2 band), and normalized to the integrated intensity at the lowest temperature measured (T = 293 K). The integrated emission intensity decreases quite strongly with increasing temperature from T = 293 K to T = 800 K. At T = 423 K, the integrated emission intensity maintains about 65% of that measured at room temperature, which is similar to the previous report by Liu et al.18 (cf. 78% of the room-temperature intensity retained at T = 423 K under excitation at 465 nm). The thermal quenching temperature, defined here as the temperature at which the PL intensity has dropped to 50% of the low-temperature value, is T50% ≈ 470 K. [Table I column headings: current (mA); intensity of the near-UV LED (a.u.); intensity of the silicone-gel-coated near-UV LED (a.u.); emission loss due to the silicone coating (%).] Red-LED prototype.- Fig. 3a shows photographs of the red LED prototypes coated with the five different phosphor-to-silicone-gel concentrations under a 5 mA current supply. A trend of decreasing violet emission (395 nm) from the LED base and increasing intensity of the red emission (613 nm) with increasing phosphor concentration is observed (Fig.
3b). For the phosphor concentration of 270 g/l, the peak intensities of the 395 and 613 nm bands are almost the same, whereas for higher phosphor concentrations, the red emission dominates. For the highest phosphor concentration (670 g/l), the violet light is completely absorbed by the phosphor and only red-light emission is observed. Fig. 3c shows the emission spectra over the red wavelength region for the highest phosphor concentration (670 g/l) as a function of increasing current to the LED base. The spectra show a general increase in intensity with increasing current supply. The intensity integrated over this spectral range shows a linear response with increasing current (inset). To further investigate the nature of the emission from the red-LED prototypes, we examined the light output with respect to the individual LEDs used for each prototype. For a power supply of 5 mA, the five LEDs showed slightly different output intensities; see Table I, which also contains the relevant data for the LEDs coated with phosphor. The violet emission (395 nm) decreases by 78% for the lowest to 99.9% for the highest phosphor-to-silicone-gel concentration. While some of this decrease (about 30%) can be attributed to absorption in the pure silicone gel, the observed decrease is mainly a consequence of the low Eu3+ f-f absorption strength. Furthermore, a portion of the 395 nm photons will be converted to 613 nm photons; however, we observe that the output intensity at 613 nm is highest for the prototype with the lowest phosphor concentration and shows a trend of slightly decreasing intensity for higher concentrations. By assuming that the emission intensity at 613 nm encompasses all the red emission from the LED prototypes, the external quantum efficiency (EQE), which is proportional to the ratio of the number of emitted photons to the number of absorbed photons, is found to decrease from 7.33% for the LED with the lowest phosphor content to 4.13% for the highest phosphor content (Table I, last column, lower panel). The electrons in the f-orbital of the Eu3+ ion are shielded from their surroundings and are largely unaffected by the external bonding, which manifests as very narrow bands in the excitation and emission spectra. Upon comparison of the EQE between Eu3+-doped samples and Mn4+- and Eu2+-doped samples,[13][14][15][16][17] one can clearly observe the higher EQE for the latter. This can be considered a downside of Eu3+ excitation, as it is very sensitive to the host-dopant mismatch and requires materials with a wider bandgap to provide energy migration channels for Eu3+ excitation. The LER, or the brightness of the light emitted by the red-LED prototypes as perceived by the average human, was calculated according to the following relationship: LER = 683 lm/W × ∫V(λ)I(λ)dλ / ∫I(λ)dλ, where the prefactor 683 lm/W is a normalization factor, V(λ) is the eye sensitivity, and I(λ) the emission spectrum.24 Under an applied current of 1 mA, the prototype with the highest phosphor concentration exhibits an LER of 238 lm/W, which decreased only very slightly, to 235 lm/W, when increasing the current to 18 mA (Fig. 3c, inset), suggesting a high stability of the luminescence toward increasing current to the LED base. The colors of the light emitted from the five LED prototypes were further evaluated using a CIE 1931 color space diagram (Fig.
4). The color evolves from being dominated by the violet emission from the LED base at the lowest phosphor concentration to a bright red color for the highest phosphor concentration, i.e., very close to the red edge of the color space diagram. Furthermore, we observe that the emission spectrum is dominated by the strong emission band at 613 nm (5D0 → 7F2), whereas the other emission band at approximately 700 nm (5D0 → 7F4) is much weaker, resulting in a high color purity. The high color purity, temperature stability, and LER make Ca9Eu(VO4)7 a highly promising phosphor for technological applications. Conclusions To conclude, our results establish that the emission spectrum of the red-emitting phosphor Ca9Eu(VO4)7 is dominated by a sharp emission band at 613 nm under excitation with near-UV light (395 nm). The data presented in this work demonstrate that the Eu3+-based phosphor has significant issues with respect to practical implementation. The color of the emitted light is stable upon increasing the temperature from T = 293 to T = 793 K, but shows a strong decrease in the emitted intensity. At T = 420 K, the integrated intensity of the 613 nm emission band has decreased to about 65% of that at room temperature, and at T = 470 K it has decreased to about 50%. Furthermore, the performance of the Ca9Eu(VO4)7 phosphor was evaluated by coating the phosphor on near-UV LED chips with silicone gel as an encapsulating agent. By this relatively simple phosphor capping technique, we obtain red LEDs featuring a high red color purity and thermal stability. Figure 1. (a) PXRD pattern of Ca9Eu(VO4)7. The black curve shows the experimental data and the red curve the calculated data for the whitlockite-type crystal structure. The inset shows the difference between the observed and calculated intensity from the crystal structure refinement using the Rietveld method. (b) Schematic diagram of the whitlockite-type crystal structure of Ca9Eu(VO4)7, with atomic labels. Figure 2. (a) PL spectra of Ca9Eu(VO4)7 phosphor. The excitation (Exc.) spectrum was measured at a fixed emission wavelength of 613 nm, and the emission (Emi.) spectrum was measured upon excitation at 395 nm. (b) Diffuse reflectance spectrum of Ca9Eu(VO4)7 phosphor between 380 and 500 nm. (c) PL emission spectra of Ca9Eu(VO4)7 phosphor, as a function of temperature, for excitation at 454 nm. (d) Temperature dependence of the normalized emission intensity for Ca9Eu(VO4)7 phosphor under excitation at 454 nm. Figure 3. (a) Photographs of the pc-LED prototypes with phosphor-to-silicone-gel concentrations of (1) 133 g/l, (2) 270 g/l, (3) 400 g/l, (4) 530 g/l, and (5) 670 g/l, under a current supply of 5 mA. (b) Emission spectra of the pc-LED prototypes coated with different phosphor-to-silicone-gel concentrations, under operation with a current of 5 mA. (c) Emission spectra of the pc-LED prototype with a phosphor concentration of 670 g/l, as a function of applied current. The inset shows a graph of the integrated emission from 600 to 635 nm as a function of applied current (black), and the LER (red).
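As a numerical illustration of the LER relationship quoted above, the snippet below evaluates 683 lm/W × ∫V(λ)I(λ)dλ / ∫I(λ)dλ on a wavelength grid. Both curves are placeholders: V(λ) is approximated by a Gaussian centred at 555 nm instead of the tabulated CIE photopic sensitivity, and I(λ) is a single idealized 613 nm line, so the printed value only indicates the order of magnitude of the reported 238 lm/W.

```python
# Minimal sketch of the LER integral: LER = 683 lm/W x
# integral(V(l) I(l) dl) / integral(I(l) dl).
import numpy as np

wl = np.linspace(380.0, 780.0, 2001)                 # wavelength grid, nm
V = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)        # approx. eye sensitivity
I = np.exp(-0.5 * ((wl - 613.0) / 5.0) ** 2)         # placeholder 613 nm line

ler = 683.0 * np.trapz(V * I, wl) / np.trapz(I, wl)  # lm/W
print(f"LER ~ {ler:.0f} lm/W")
```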
3,924.2
2019-09-12T00:00:00.000
[ "Materials Science", "Physics" ]
Strategies for Determining Electron Yield Material Parameters for Spacecraft Charge Modeling Accurate modeling of spacecraft charging is essential to mitigate well-known and all-too-common deleterious and costly effects on spacecraft resulting from charging induced by interactions with the space plasma environment. This paper addresses how the limited availability of electron emission and transport properties of spacecraft materials, in particular secondary electron yields, and the wide ranges measured for such properties pose a critical issue for modeling spacecraft charging. It describes a materials charging database for electron emission properties under development, which facilitates more accurate spacecraft charge modeling when used in concert with the strategies outlined herein. These data and techniques provide tools for more accurate material selection, increased confidence in charge models, and a concomitant decrease in mission risk. They also allow better customization of models in response to prolonged space environment exposure and specific mission requirements, which may evolve materials properties. Introduction The space environment is harsh and can adversely affect mission effectiveness through its interaction with spacecraft, components, and materials (Dennison, 2015; Hastings & Garrett, 1996; Lai, 2011). Indeed, environmentally induced anomalies are dominated by spacecraft charging effects (Koons et al., 1999), with the majority related in some way to electrostatic discharge or charging effects (Bedingfield et al., 1996; Leach & Alexander, 1995). To mitigate these risks, various agencies (e.g., Air Force Research Laboratories, European Space Agency, Japan Aerospace Exploration Agency, and National Aeronautics and Space Administration [NASA]) have devoted significant resources to developing modeling tools for spacecraft charging due to spacecraft interactions with the plasma. Modeling space plasma environment-induced effects on spacecraft requires knowledge of the following: • The environment and impinging fluxes during spacecraft orbits, which are mission-specific and can be incorporated through environmental models and databases (Hastings & Garrett, 1996; Lai, 2011). • The satellite geometry and orientation in the space environment, accomplished through charging codes. The three most prominent codes are NASCAP-2k (Mandell et al., 2006; Davis et al., 2000; Mandel et al., 1976; Katz et al., 1977), SPENVIS (European Space Agency, 2018), and MUSCAT (Muranaka et al., 2008). • Precise descriptions of the materials used in spacecraft construction, for the specific spacecraft design (Toyoda, 2014; Dennison et al., 2007). • The relevant materials properties characterizing the interaction of these specific materials with the environment, and how these properties may change with exposure to the space environment (Dennison et al., 2002; Dennison et al., 2007; Katz et al., 1977; Parker, 2018). A reliable, comprehensive database of spacecraft materials and the characterization of those materials is being created in the form of a materials charging database to be used in conjunction with existing charging codes. This is being done to provide an improved, more informed modeling tool to be used by researchers for environment-induced spacecraft charging and data validation. The accuracy of charge modeling will be improved as knowledge concerning its inputs is increased.
Proposed Strategies This paper focuses exclusively on secondary electron yield (SEY), as this material property has been shown to have one of the most critical effects on spacecraft surface charging (Dennison et al., 2007; Lai, 2010; Katz et al., 1986). The paper presents strategies for determining the best available SEY data to use when modeling materials in specific spacecraft applications. To demonstrate the strengths and weaknesses of the different strategies, a simple, ubiquitous spacecraft material, aluminum, is analyzed here in detail. A similar study has been done recently for copper (Lundgreen & Dennison, 2018). Aluminum provides an illustrative example, as many studies of ostensibly the same material exhibit a wide variety of reported SEY values (see Figure 1a), which we attribute to the disparities that exist between elemental and technical aluminum with clean, oxidized, contaminated, and rough surfaces. A three-tiered strategy for determining appropriate electron yield material parameters for specific spacecraft charge modeling is proposed. 1. The easiest approach is to select parameterized yield properties from a limited database of materials tabulated for use with the standard charging codes mentioned above (Dennison et al., 2002; Dennison et al., 2007; Drolshagen, 1994; Mandel et al., 1976; Mandell et al., 1993; Mandell et al., 2006; Parker, 2018). 2. A second method involves the review of available literature to identify data for more directly applicable materials not presently tabulated in these databases (e.g., Joy, 1995; Walker et al., 2008). 3. The third, most sophisticated, and most time-consuming method requires selecting materials and specific data sets that are most mission-specific, relevant to the charging concerns at hand and to possible changes in materials with prolonged exposure to the space environment. Electron yield studies of nominally similar materials often show widely differing results (see Figure 1a). Indeed, even round-robin studies in different laboratories of carefully selected "standard" calibration materials such as Au and graphitic carbon show smaller, but still significant, variations in yields (see Figure 2; Dennison et al., 2016); these can be attributed to subtle differences in instrument calibration, measurement methods, and sample preparation at the different facilities, details which are seldom provided in the standard literature. Indeed, even the definition of "SEY" can differ between studies and lead to ambiguities (see Appendix A). Select Parameterized Yield Properties From a Limited Database of Materials The easiest method for selecting electron yield material parameters entails selecting parameterized yield properties from a limited database of materials, as tabulated for use with standard charging codes. Table 1 lists the model parameters in the default materials database included with successive versions of NASCAP (Davis et al., 2000; Mandel et al., 1976; Mandell et al., 1993; Mandell et al., 2006). These are used to characterize SEY with the Katz et al. (1977) or far less accurate Feldman (1960) models incorporated in the NASCAP-2k, SPENVIS, and MUSCAT codes. The parameters are as follows (a schematic sketch of this parameter set is given after the list): • the maximum SEY, δmax; • the energy Emax associated with δmax; and • two amplitudes, b1 and b2, and two exponents, n1 and n2, for an analytic biexponential range expression. (Note that there are actually only five independent parameters, as only the ratio (b1/b2), rather than b1 and b2 separately, enters (Chang et al., 2000; Purvis et al., 1984).)
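A small sketch of how such a database entry could be held in code follows. The functional form used here for the range expression, R(E) = b1·E^n1 + b2·E^n2, is an assumption inferred from the parameter description above (two amplitudes and two exponents), not a form verified against the NASCAP source.

```python
# Sketch of one material entry in the default NASCAP-style database.
# The biexponential range expression is assumed to take the form
# R(E) = b1*E**n1 + b2*E**n2; only the ratio b1/b2 matters once the
# yield is normalised, leaving five independent parameters per material.
from dataclasses import dataclass

@dataclass
class SEYParams:
    delta_max: float   # maximum SEY
    e_max: float       # energy of maximum SEY (keV)
    b1: float          # range amplitude 1
    n1: float          # range exponent 1
    b2: float          # range amplitude 2
    n2: float          # range exponent 2

    def electron_range(self, energy_kev: float) -> float:
        """Assumed biexponential range expression (illustrative only)."""
        return self.b1 * energy_kev**self.n1 + self.b2 * energy_kev**self.n2

# Placeholder values, not taken from Table 1:
al = SEYParams(delta_max=0.97, e_max=0.3, b1=1.0, n1=1.5, b2=0.5, n2=0.8)
print(al.electron_range(1.0))
```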
Values selected from such parameterized yield properties tabulated in one of the standard charging codes, unfortunately, • are severely limited for novel materials and more demanding mission requirements; • are occasionally inaccurate or misreferenced; • do not provide the necessary information to identify details about the tabulated materials; • do not reflect the nature of the specific compositions or surface modifications appropriate to many spacecraft applications; and • do not address the evolution of material properties with space environment exposure. Table 1 lists the SEY parameters in the default materials database for five elemental conductors, three bulk insulating materials, and five spacecraft materials; these values are also included with current versions of the SPENVIS and MUSCAT charging codes. The entries in the default material database in Table 1 are certainly severely limited in terms of the number of tabulated common spacecraft materials and do not contain novel materials or materials used for more demanding mission requirements (e.g., carbon fiber/epoxy composites, multilayer insulation, layered optical materials, conduction-enhanced nanodielectric materials, and lubrication compounds). [Figure 1 caption (cf. Table 2): (a) Linear plot of SEY versus energy. The inset legend identifies the lines associated with each study; the "best" representative studies for various conditions are highlighted as solid lines. (b) Linear plot of SEY showing datasets classified by surface conditions. (c) Log-log plot of reduced SEY, δ/δmax, versus reduced energy, E0/Emax. For (b) and (c), the solid, dashed, and dotted lines signify studies of smooth, rough, and unknown surfaces, respectively; the green, red, and blue lines signify studies of clean, oxidized, and unknown contaminated surface coverages, respectively; the bulk Al2O3 (sapphire) SEY curve is indicated with purple lines (Christensen, 2017).] In addition, this database method often incorporates inappropriate or inaccurate values. Sometimes tabulated values are extracted from sources that are not fully documented. Even when specific references are cited, in many instances the original sources are difficult to locate, do not provide the necessary information to identify details about the materials studied, or do not reflect the nature of the specific compositions or surface modifications appropriate to spacecraft applications. As an example, consider the values of δmax and Emax for Al in the default NASCAP database (see Table 1; Mandel et al., 1976). These values are not well documented and appear to be significantly lower than many other SEY values for Al (see the discussion below; Figure 1, and Table 2). The origin of the NASCAP values is obscured: Mandel et al. (1976) and Katz et al. (1977) cited Dekker (1958) as the source for δmax and Emax; a Dekker plot of δmax versus work function, W, was traced to two different plots from Baroody (1950), one a reduced yield curve of δ/δmax versus E/Emax and another of δmax versus W; and Baroody, in turn, cited Bruining and De Boer (1938), which lists original experimental data for δ/δmax versus E/Emax and δmax but does not include information on Emax. The Bruining dataset is for "secondary electron emission of [an] aluminum layer deposited by sublimation in a vacuum" (Bruining & De Boer, 1938), although specifics of the level of surface roughness, oxidation, and contamination (in particular, from the 1938-vintage diffusion and getter pumps and glass vacuum system) are not fully identified.
Thus, in verifying the provenance and integrity of the default NASCAP values, it appears that these values for δ/δmax versus E/Emax and δmax are (at least at some level) for smooth, clean elemental Al, though the origin of the values used for Emax and the associated sample conditions are not traceable. The low δmax listed in the database is expected for low-Z materials and is consistent with studies of other clean elemental Al samples, but inconsistent with the higher δmax expected for technical Al with more oxidized surfaces. The implications associated with using significantly different yield values for materials not appropriate to spacecraft applications, such as clean, smooth elemental Al in lieu of rough, oxidized, and contaminated technical Al, are potentially troubling, as they can lead to substantially inaccurate predictions from charging models. Baglin et al. (2000) measured the change in δmax as an oxide layer was removed from a technical Al sample using Ar sputtering. Dennison et al. (2007) performed trade studies of the effects of changing yields on the charging of hypothetical idealized spacecraft in representative space environments, based on evolving SEY measurements of oxidized Al to clean Al to carbon-contaminated Al (Davies & Dennison, 1997) and of clean Au to carbon-contaminated Au (Chang et al., 2000); such changes in SEY due to surface modifications were shown to potentially lead to dramatic threshold charging effects (Chang et al., 2000; Dennison et al., 2007). A cursory investigation of recent studies returned a substantial list of references which assumed that the NASCAP default values for Al were appropriate for their modeling, including Davis et al. (2017). It is significant to note that this ambiguity for Al has also been propagated by other international charging codes, including MUSCAT (Nakamura et al., 2018) and SPENVIS (European Space Agency, 2018). A newer SPENVIS materials database does include a technical Al material, with a rougher, more oxidized surface (Drolshagen, 1994). In a similar vein, the value in Table 1 of δmax for Mg is more consistent with other results for smooth, clean elemental Mg (Bruining & De Boer, 1938; Joy, 1995; Walker et al., 2008) than for technical Mg with rougher and more oxidized surfaces (layered bulk MgO has a very high δmax of 3 to 15). The values in Table 1 of δmax for Ag and Au are both less than 1 and are well below values for smooth, clean elemental surfaces (~1.6 for Ag and 1.4 to 1.8 for Au; Dennison et al., 2016; Joy, 1995, and references therein); these perhaps result from significant carbon-rich contamination layers. The value of δmax for Aquadag™ (a rough coating of colloidal microcrystalline graphite) in Table 1 agrees well with other studies of such forms of carbon (Dennison et al., 2007; Joy, 1995) and is lower, as expected, than that of smooth, clean highly oriented pyrolytic graphite. Table 1 and the default NASCAP database (Mandel et al., 1976; Mandell et al., 1993) list three common bulk insulating spacecraft materials. The values are in reasonably good agreement with other studies of similar materials (Dennison et al., 2007; Joy, 1995, and references therein), although the yields of highly insulating materials are notoriously difficult to measure (Hoffmann & Dennison, 2012). There are four spacecraft materials listed in Table 1 and the default NASCAP database.
These listings are values for representative categories of spacecraft surface components rather than actual materials; even more than the conducting and insulating entries discussed above, these are severely limited for the myriad functionally similar or novel materials, do not provide the necessary information to identify details about the components, do not reflect the nature of the specific compositions or surface modifications appropriate to specific spacecraft applications and more demanding mission requirements, and do not address the evolution of material properties with space exposure. The conductive and nonconductive paint values appear to be derived from the listed values for Kapton™, with only the conduction properties modified; conductive paint may be a surrogate for conductive carbon-loaded Black Kapton™ (Dennison et al., 2007). The values listed for the solar cell are much more like those of SiO2 than of Si (Joy, 1995; Dennison et al., 2016, and references therein), suggesting that this entry may be intended to simulate a semiconductor solar cell (of unspecified type) with a thick, insulating, uncoated SiO2 coverglass. The composition and thickness of the indium tin oxide coating are not specified; as no information about an underlying substrate is provided, this is presumably bulk indium tin oxide. The fifth entry in this category, SCREEN, is an idealized electron-absorbing element with no electron emission, rather than an actual material. Review of Available Literature for Data of More Directly Applicable Materials The second method involves a more extensive review of the available literature to identify data for more directly applicable materials not presently tabulated in these databases. This requires investigation of source background information to select materials parameters based on specific knowledge of the proposed mission-specific conditions and applications and on the materials characteristics known for individual studies. However, selecting appropriate values of δmax and Emax from such a thorough literature analysis is often confusing, as the data can show large variation. This is illustrated for representative data from 22 studies of the ubiquitous spacecraft material Al in Figure 1a, which shows the SEY curves, and in Table 2, which lists the associated fitting parameters δmax and Emax, as well as limited details about the materials studied. Many studies have limited ranges of measured energies (see Figure 1a), making it difficult, or impossible, to determine all the fitting parameters for SEY models. As noted above, the literature often does not provide sufficient details of sample characterization and preparation, experimental methods, or data analysis to properly choose from the myriad and often conflicting results. Again, a word of caution is in order concerning the appropriate use of the terms SEY versus total electron yield (TEY), as discussed in Appendix A. These studies of Al illustrate that, in reviewing only a selected number of papers, discrepancies can occur. [Table 1 note: values from Mandell et al. (1993). Abbreviation: SEY: secondary electron yield. (a) Uses Feldman's formula (Feldman, 1960) for SEY, which provides a method for estimating the parameters in terms of the density and stoichiometry of a material even when SEY data are lacking.] Consider also the round-robin study of Dennison et al. (2016). In this round-robin study, where good agreement for TEY values was expected, significant variations in maximum TEY were found, with values for Au varying from 1.3 to 1.8 and for highly oriented pyrolytic graphite varying from 1.3 to 1.5
(see Figure 2). Most spacecraft applications are better served by using data for the rougher, heavily oxidized surfaces typical of technical materials. Thus, for spacecraft charging models, it is better to select, from the multitude of data shown in Figure 1 and listed in Table 2, studies of technical Al (Christensen, 2017), which typically have δmax values 2 to 2.5 times those of smooth, clean elemental Al. Compare, for example, two materials reported by Prokopenko and Laframboise (1980), with δmax values of 0.97 for clean, smooth elemental Al (green dot, Figure 1) and 2.6 for heavily oxidized Al (red dot, Figure 1; Gibbons, 1964). The implications associated with using the more appropriate yield values for technical Al, rather than elemental aluminum, were discussed in the previous section. Four representative studies have been identified as most appropriate for technical oxidized rough Al (Dennison et al., 2002); clean, smooth elemental Al (Bruining & De Boer, 1938); highly oxidized Al (Baglin et al., 2000); and bulk crystalline Al2O3 (sapphire; Christensen, 2017); these are denoted by the bold solid lines in Figure 1a and the bold entries in Table 2. Selecting Materials and Specific Data Based on Mission Specifications and Charging Concerns As is evident in the sections above, different studies of ostensibly the same material can have a wide range of values for δmax (see Table 2) and for their SEY curves (Figure 1a). While section 2.1 offered some general explanations for trends in δmax, there was insufficient information to distinguish the results of different studies based on the nature of the materials studied. The likely causes of the SEY variations in the identified studies on Al include the following: • variations in bulk composition or material preparation; • surface morphology; • surface contamination and oxidation; • net surface charge of the sample; and • methods of data acquisition and parameterization. These causes, which are often not stated explicitly in the literature, can sometimes be inferred through careful analysis of the full yield curves using a database of multiple SEY measurements, or by consideration of the prevailing experimental methods when the data were taken. These causes of SEY variation can be partially understood in analogy to photo-induced electron yields, or photoyields; photoyields depend on energy transfer due to the photoelectric effect and other interactions of photons with the materials, resulting in absorption, reflection, and transmission curves as functions of incident photon energy (Dennison et al., 2007; Lai & Tautz, 2006). Surface morphology can affect SEY, as illustrated in Figure 3a. Rougher surfaces, with features on the (typically sub-micron) scale of electron penetration depths and with higher depth-to-width aspect ratios, enhance the recapture of emitted electrons through surface collisions, thereby lowering SEY (Baglin et al., 2000; Bergeret et al., 1985; Wood et al., 2019). The effects of surface roughness are smaller for higher-energy backscattered electrons, which have a narrower distribution of emission angles than lower-energy secondary electrons (SEs; Nickles et al., 2000; Reimer, 1985; Wood et al., 2019). By contrast, smooth surfaces minimize recapture by maximizing the solid angle for the escape of emitted electrons without further collisions with the surface. The effects of surface roughness are more pronounced at lower incident energies, where more SEs tend to be generated near the surface.
Common methods affecting surface roughness include material preparation, deposition, the formation of high-aspect-ratio textured or dendritic surfaces, chemical etching, mechanical abrasion, polishing, sputtering, and thermal annealing. Such methods are routinely used to intentionally reduce electron emission from surfaces (Baglin et al., 2000; Bergeret et al., 1985; Montero et al., 2016; Wood et al., 2019). Surface coatings can also change SEY (Baglin et al., 2000), although their effects on SEY are more nuanced and varied than the effects due to roughness (Wilson, 2019). Coatings of low-Z conducting materials (e.g., C) will typically lower SEY, while high-Z conducting materials (e.g., Au) will typically increase SEY, though thin surface layers can produce complicated incident-energy-dependent effects from the underlying substrate (Wilson, 2019). The presence of adsorbed water vapor can significantly increase SEY; for example, for Al or Cu surfaces, condensation of water can greatly enhance yields, while a vacuum bakeout has been shown to reduce this increase (Baglin et al., 2000). Similar changes in yield can be effected by ion bombardment with sputtering or ion glow discharge using various gases, which can act to either increase or decrease the SEY (Baglin et al., 2000). Two common coating effects considered explicitly here are the formation of oxide layers and of carbon-rich contamination layers. In many cases the formation of highly insulating oxides (e.g., Al2O3 and SiO2) can significantly increase the elemental material yields. The formation of semiconducting oxides (e.g., Cu2O) typically acts to reduce yields. Note the increase in δmax as the oxide layer on Al increases, from clean, smooth elemental Al (Bruining & De Boer, 1938) to technical oxidized rough Al (Dennison et al., 2002) to highly oxidized Al (Baglin et al., 2000) to bulk crystalline Al2O3 (sapphire; Christensen, 2017; see Table 2 and the SEY curves in Figure 1a). Also note how the SEY curves of the two highly oxidized studies in Figure 1a (Baglin et al., 2000; Copeland, 1935) follow the yield curve for sapphire up to ~350 eV, then deviate and begin to approach those of less oxidized technical and elemental Al at high energies; this is consistent with the incident electrons reaching sufficient energy to penetrate the oxide layers. Carbon-rich contamination layers are often formed under electron bombardment; this is a phenomenon well known to electron microscopists (Reimer, 1985). Formation is believed to result from the ionization of residual carbon species in the vacuum system (e.g., CO, CO2, and hydrocarbons) or of molecules desorbed from surfaces during electron irradiation, which are then propelled toward the sample surface by the electron beam and subsequently cracked, leaving disordered C-rich surface layers (Andritschky, 1989; Baglin et al., 2000). C-rich surface layers are commonly encountered in SEY studies, from studies in lower vacuum (e.g., scanning electron microscope systems) and from systems employing diffusion pumps (e.g., most, but not all, studies done prior to the mid-1960s). C-rich surface layers are similarly present in space applications. Indeed, Caroline Purvis (1995), one of the central developers of the original NASCAP code, once quipped that "all spacecraft surfaces eventually turn into carbon" via deposition of organic contamination and outgassing. The net surface charge of a sample, from either an applied bias or accumulated charge, can affect SEY (Hoffmann & Dennison, 2012), as illustrated in Figure 3c.
Negatively charged samples (Vbias < 0) will repel the emitted SEs, and the SEY will be largely unchanged. Positively biased samples (Vbias > 0) will reattract low-energy SEs, and the SEY will decrease. Although this effect is typically not considered in spacecraft charging codes, it is important to recognize that it may well occur in materials studies measuring yields from bulk insulators, nonconductive coatings, or biased samples, as these materials are most likely to retain charge (Hoffmann & Dennison, 2012). Olano et al. (2019) describe an interesting system where surface roughness and charging effects of conductor/dielectric composites are evident. The studies for conducting Al discussed here were all taken at or near room temperature (though specific temperatures were seldom cited). Temperature is not expected to have a large effect on SEY for conducting materials; this has been confirmed by limited studies both above and below room temperature (Nickles, 2002). By contrast, there may be modest temperature effects on SEY in semiconducting and smaller-bandgap insulating materials (Grais & Bastawros, 1982; Nickles, 2002), where electron-hole pair creation and recombination can significantly affect the carrier concentrations of more weakly bound electrons (Alig & Bloom, 1975), which are most likely to be involved in SE emission (Nickles, 2002); this is also borne out by limited experimental studies (Grais & Bastawros, 1982). The same data sets shown in Figure 1a are shown in Figure 1b with the yield curves color coded to indicate surface morphology (smooth, rough, and unknown) and surface layers (clean, oxidized, C-rich coatings, and unknown surface layers). The increasing trend in δmax with increased oxide layer thickness noted above becomes much more evident in Figure 1b. Similarly, though to a lesser extent, an increase in Emax with increased oxide thickness can also be identified. A novel method for determining material characterization is outlined next, which involves the use of reduced-format SEY curves. Figure 1c shows the same Al studies from Figures 1a and 1b, plotted in a reduced format (δ/δmax versus E0/Emax) on log-log axes. This method produces reduced yield curves with a consistent "inverted V" shape, which emphasizes the power-law behavior of the yield curves for the reduced data well above or below E0 = Emax. The reduced yield curve is modeled with a reduced power-law yield model, δ(E0)/δmax = r0 [(E0/Emax)^(−n) + (E0/Emax)^(m)]^(−1) (equation 1), where E0 is the incident energy and r0 is a constant fully determined by n, m, and Emax (Wood et al., 2019). This is similar to one of the SEY models employed in SPENVIS (Sims, 1992). The parameters m and n determine the slopes of the log-log plots of SEY well above and below E0/Emax = 1, respectively. Figure 1b emphasizes trends in the parameters δmax and Emax, whereas the reduced yield curves in Figure 1c emphasize trends in the parameters n and m, as δmax and Emax have been factored out in the reduced format. Table 2 lists these four fitting parameters for the 22 Al studies plotted in Figure 1. Bulk smooth Al2O3 (sapphire) is also included, as it represents a limiting case for oxidation, namely the bulk limit of an infinitely thick, fully oxidized aluminum sample (Christensen, 2017). Each study in Table 2 has been characterized in terms of surface morphology as smooth or rough, and in terms of surface layers as clean, oxidized, or C-rich contamination. The designations are subjective and are classified as unknown when there was insufficient information given in the source study.
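The snippet below implements the reduced power-law model written as equation 1 above. Note that this functional form is itself inferred from the stated slope behavior (slope n below Emax, slope −m above) and from the requirement that r0 normalize the peak of the reduced curve to 1, so it should be read as a sketch rather than the verbatim published model.

```python
# Sketch of the reduced power-law yield model (equation 1):
# delta/delta_max = r0 / ((E0/Emax)**(-n) + (E0/Emax)**m).
# On log-log axes this gives the "inverted V": slope ~ n for
# E0 << Emax and slope ~ -m for E0 >> Emax.
import numpy as np

def reduced_yield(e_ratio, n, m):
    # The peak of f(x) = 1/(x**(-n) + x**m) sits at x* = (n/m)**(1/(n+m));
    # r0 is chosen so the maximum of the reduced curve equals 1.
    x_star = (n / m) ** (1.0 / (n + m))
    r0 = x_star ** (-n) + x_star ** m
    return r0 / (e_ratio ** (-n) + e_ratio ** m)

e = np.logspace(-1.5, 1.5, 7)          # E0/Emax grid
print(reduced_yield(e, n=0.8, m=0.6))  # rises, peaks near E0/Emax ~ 1, falls
```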
The conventions established for the plotting symbols for each study used in Figure 4, based on these designations, are shown below Figure 4, as are the line symbols used for Figures 1b and 1c. Figures 4a-4d show plots of these four fitting parameters, using the designated plotting symbols to visualize the relationship between surface conditions and the fitting parameters. Using the results displayed in Figures 1b, 1c, and 4a-4d and Table 2, we have attempted to establish correlations between the various yield curves and their surface properties. Studies of specific samples treated so as to explore a range of oxidation layer thicknesses have established a trend of higher δmax for oxidized surfaces (Baglin et al., 2000; Bruining & De Boer, 1938; Chang et al., 2000). The curves displayed in Figure 1b in general confirm this trend, with most oxidized surfaces (red curves) lying between a lower bound for smooth clean Al (green curves) and bulk Al2O3 (purple curve); the same is true for the δmax values plotted in Figure 4a, with clean Al at δmax < 1 and oxidized Al at 2 < δmax < 4. However, these trends are not as obvious when considering the full SEY curves, most likely as a result of other differences between the various studies, including roughness, C-layers, experimental methods, and calibration. The Emax values shown in Figure 4b in general show lower values for clean samples (green symbols) and higher values for rough or oxidized samples (open or red symbols), with an approximate boundary at ~0.34 keV, as indicated in Figure 4b. Again, this trend is not as immediately apparent in the yield curves of Figure 1b. Correlations between the slopes n and m of the reduced yield curves in Figure 1c, where the dependence on δmax and Emax has been removed through the use of the reduced form, allow for further distinguishing between sample characteristics. Figure 4c shows that oxidized (red symbols) and rough (open symbols) samples have consistently larger slopes n for SEY below E0/Emax = 1 than smooth samples (solid purple or green symbols; Bronshtein's low-energy slope is anomalously large), with an approximate boundary at n ~ 0.45, as indicated in Figure 4c. The curves displayed in Figure 1c corroborate this trend, with all smooth surfaces (solid curves) lying below the rougher surfaces (dashed curves). In Figure 4d, the m values tend to be lower for clean, smooth surfaces (solid green symbols) than for rough samples (open symbols). Oxidized samples (red symbols) have m values between those of clean surfaces (green symbols) and heavily oxidized sapphire (purple symbol). These trends are borne out in the ordering of the lines in Figure 1c for E0 > Emax, with oxidized (red) curves falling between clean (green) and heavily oxidized sapphire (purple) curves. The apparent trends identified above are not entirely consistent, as there are exceptions and complications owing to multiple surface modifications; but, for the most part, the conclusions are supported. In general, the observed trends are consistent with the physics-based expectations discussed at the beginning of this section. Conclusions and Future Work Careful selection of appropriate materials SEY data can provide significantly improved modeling of spacecraft charging (Dennison et al., 2007). Skill is required in selecting material studies based upon mission specifications and charging concerns, as these are related to the environment and material choices for specific mission requirements.
Specifically, for Al, the use of values for technical alloys with rough surfaces and thicker oxide layers is most often preferred over values for elemental clean, smooth surfaces for beginning-of-life space simulations, and technical Al with a thin C-rich contamination layer is often more appropriate for end-of-life modeling. Thus, utilizing only the default tabulated NASCAP SEY values for Al, best suited to clean, smooth elemental Al, can often introduce large uncertainties into spacecraft charging models. For this reason, care must be taken in selecting specific data sets that are applicable to the mission specifications and to the charging concern associated with the environment and objectives proposed. The bold-denoted data sets in Figure 1a and Table 2 offer three studies of Al SEY that are deemed representative of clean, smooth, elemental Al (Bruining & De Boer, 1938); heavily oxidized, rough Al (Baglin et al., 2000); and technical Al with modest oxidation and an unpolished surface, as is commonly encountered in typical spacecraft operation (Dennison et al., 2002). Analysis of the data collected for the USU SEY database was critical in determining these best studies. Trends observed in the fitting parameters for numerous reported SEY studies under varying sample conditions can be exploited to the spacecraft modeler's advantage to identify which experimental studies best match the conditions for a specific space mission. This requires knowledge of both the specific mission environments, objectives, and materials, and the potential causes of variations in the SEY of the materials. This evaluation can identify which studies of similar materials are most applicable to a specific mission and can also provide guidance on the extent of the changes expected from environmentally induced materials surface evolution. For example, many samples will develop an oxide coating (typically 0.001 to 0.1 μm) prior to launch or as they are exposed to atomic oxygen in space; many sample surfaces will develop C-rich contamination layers due to outgassing (typically 0.001 to 1 μm); or they will develop some type of roughened surface (roughness on the order of 0.1 to 10 μm) due to mechanical treatment of the material or to environmental effects such as ion sputtering from the solar wind. To facilitate this approach to improved materials modeling, a database of multiple SEY studies is being compiled, with the capability to sort and identify individual data sources based upon the materials characteristics of the various studies. While the database has not yet been made available to the public, work is ongoing with the NASA Engineering and Safety Center to make this resource available to the spacecraft charging community. Appendix A A word of clarification on the definition of SEY in the context of spacecraft charging codes is necessary. The electron yield of a material is universally defined as the ratio of emitted electrons per incident electron. This is traditionally separated into two subcategories, SEY and backscattered electron yield (BSEY). From an operational perspective, the separation is made in terms of the energy of the emitted electrons: SEs are emitted with energies <50 eV, while backscattered electrons are emitted with energies >50 eV (Sternglass, 1954). This operational distinction is used in the spacecraft charging community and the NSM charging codes, in the scanning electron microscopy literature (Joy, 1995; Reimer, 1985), and in numerous other fields.
Therefore, this operational definition of SEY is also the one used for the data presented in this paper. From an alternate physics perspective, the separation is made in terms of the origin of the emitted electrons: backscattered electrons originate in the incident beam and can undergo one or more quasi-elastic collisions before escaping back out of the surface of the material; alternately, SEs originate in the material, are excited into mobile states by energy deposited by incident electrons, and escape the material. These are sometimes referred to as "true secondary electrons" (Czaja, 1966). Physical models of electron emission, including equation 1 presented in section 2.3, are usually based on this physics perspective. The sum of BSEY and SEY gives the total number of emitted electrons per incident electron, which is called the TEY. Some researchers use the term "secondary electron yield" (SEY) to mean the same thing as TEY, without differentiating between the two mechanisms that produce emitted electrons. Most notably, this potentially ambiguous use of SEY has been adopted by the European space community as a standard definition (Space Engineering, 2013), even though the models used in SPENVIS make a clear distinction between SEY and BSEY as the two components of the total electron emission (European Space Agency, 2018). This fails to adequately model electron yield and often creates confusion, so it is important to distinguish between the two uses of SEY. Also, some studies of electron yield (e.g., Baglin et al., 2000; Czaja, 1966), or more commonly, some compilations of electron yield studies, fail to identify whether the measured SEY refers to TEY or SEY. For many applications, the difference between TEY and SEY is not critical, as the BSEY is usually a modest fraction of the total yield and reasonably constant over intermediate incident energies. However, for more precise studies, for studies emphasizing low or high incident energies (where BSEYs have a smaller or larger contribution, respectively), or for materials where the BSEY contribution is a larger fraction of TEY (e.g., higher-atomic-number metals), misidentification of SEY or TEY values can introduce significant error.
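A short sketch of the operational SEY/BSEY/TEY bookkeeping defined above follows; the emitted-electron energy bins and counts are invented placeholders, and the 50 eV threshold is the operational cut quoted from Sternglass (1954).

```python
# Operational split of an emitted-electron spectrum: electrons below
# 50 eV count toward SEY, those above toward BSEY, and TEY = SEY + BSEY.
# The spectrum here is a placeholder, not measured data.
import numpy as np

emitted_energies_ev = np.array([3.0, 8.0, 15.0, 40.0, 120.0, 480.0])
counts = np.array([900.0, 700.0, 400.0, 150.0, 60.0, 90.0])
n_incident = 1500.0

sey = counts[emitted_energies_ev < 50.0].sum() / n_incident    # E < 50 eV
bsey = counts[emitted_energies_ev >= 50.0].sum() / n_incident  # E >= 50 eV
tey = sey + bsey
print(f"SEY={sey:.2f}, BSEY={bsey:.2f}, TEY={tey:.2f}")
```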
8,015
2020-04-01T00:00:00.000
[ "Engineering", "Physics" ]
Preclinical development of 1B7/CD3, a novel anti-TSLPR bispecific antibody that targets CRLF2-rearranged Ph-like B-ALL Patients harboring CRLF2-rearranged B-lineage acute lymphocytic leukemia (B-ALL) face a 5-year survival rate as low as 20%. While significant gains have been made to position targeted therapies for B-ALL treatment, continued efforts are needed to develop therapeutic options with improved duration of response. Here, we first demonstrate that patients with CRLF2-rearranged Ph-like ALL harbor elevated thymic stromal lymphopoietin receptor (TSLPR) expression, which is comparable with that of CD19. We then present and evaluate the anti-tumor characteristics of 1B7/CD3, a novel CD3-redirecting bispecific antibody (BsAb) that co-targets TSLPR. In vitro, 1B7/CD3 exhibits optimal binding to both human and cynomolgus CD3 and TSLPR. Further, 1B7/CD3 was shown to induce potent T cell activation and tumor lytic activity in both cell lines and primary B-ALL patient samples. Using humanized cell- or patient-derived xenograft models, 1B7/CD3 treatment was shown to trigger dose-dependent tumor remission or growth inhibition across donors, as well as to induce T cell activation and expansion. Pharmacokinetic studies in murine models revealed 1B7/CD3 to exhibit a prolonged half-life. Finally, toxicology studies using cynomolgus monkeys found that the maximum tolerated dose of 1B7/CD3 was ≤1 mg/kg. Overall, our preclinical data provide the framework for the clinical evaluation of 1B7/CD3 in patients with CRLF2-rearranged B-ALL. INTRODUCTION Acute lymphocytic leukemia (ALL), the most common form of leukemia in childhood and adolescence, is characterized by the clonal expansion of lymphoid progenitor cells present in the bone marrow (BM), blood, and extramedullary sites [1]. In the past decade, tremendous progress has been made in the treatment of ALL with the development of targeted therapies, including tyrosine kinase inhibitors of BCR::ABL1, monoclonal antibodies, bispecific antibodies (BsAb), and chimeric antigen receptor T-cell therapy targeting cell surface antigens [2]. However, while the survival of childhood ALL approaches 90%, only 30-40% of adult patients achieve long-term remission [3,4].
BCR-ABL1-like, or Ph-like, ALL is a recently identified category of B-ALL with a poor prognosis [5,6]. This high-risk disease carries a gene expression signature similar to that of Ph+ ALL but without the BCR::ABL1 translocation, as well as genomic alterations that activate several types of kinase signaling pathways [7,8]. Rearrangement of cytokine receptor-like factor 2 (CRLF2), which encodes TSLPR [9], is found in approximately 50% of patients diagnosed with Ph-like B-ALL and is common in patients with Down syndrome [10]. CRLF2 rearrangements occur either as a translocation to the immunoglobulin heavy-chain enhancer region (IGH::CRLF2) or by a deletion of upstream PAR1 that leads to the fusion of CRLF2 to adjacent P2RY8 [11]. Activating point mutations such as Phe232Cys (F232C) have also been detected [8,12,13]. Subsequently, genetically aberrant CRLF2 dysregulates TSLPR expression and cooperates with mutations in JAK kinases to activate the JAK/STAT pathway, signaling downstream of the TSLPR/IL-7Rα heterodimeric receptor complex [14]. Importantly, TSLPR-dependent signaling promotes B-cell leukemogenesis, suggesting that targeting TSLPR with blocking or depleting strategies could be therapeutically promising. Accordingly, administration of the TSLPRα blocking antibody 1E10 has been shown to inhibit both TSLP-triggered cell proliferation and STAT transcription factor activation [15], and TSLPR chimeric antigen receptor T-cell (CAR-T) therapy has been shown to eradicate human CRLF2-overexpressing ALL in xenograft models [16]. In recent years, redirection of T cells against tumors using BsAb such as blinatumomab, a bispecific monoclonal antibody that targets CD19, has been shown to induce high response rates in relapsed/refractory B-ALL patients. One major advantage of BsAb over CAR-T is its availability "off the shelf", which reduces cost and eschews the time needed for CAR-T production [17]. However, responses with blinatumomab are relatively short, with a 12-month event-free survival of approximately 20% [18,19]. Further, clinical use of blinatumomab is limited due to a short half-life and frequent antigen loss or downregulation [20,21]. Here, we report on a novel anti-TSLPR BsAb, named 1B7/CD3. We characterized its biophysical properties and in vitro function, and demonstrated its efficacy, safety, and prolonged half-life. Together, our results suggest that 1B7/CD3 is a promising novel treatment for patients with CRLF2-rearranged B-ALL. METHODS Antibody discovery, top clinical lead selection, and BsAb construction H2L2 human transgenic mice were purchased from Harbor BioMed (Boston, MA). Single B-cell Cloning (SBCC) technology was utilized to amplify variable heavy and variable light gene regions from the cDNA of TSLPR-Fc immunized H2L2 mouse memory B-cells. The naturally paired variable regions were cloned into pcDNA3.1 + hIgG1 and hKappa expression vectors for subsequent ExpiCHO high-throughput transient expression in 96-well format. Following Protein-A magnetic bead purification, binding avidity was assayed by Bio-Layer Interferometry (BLI) with anti-Human IgG Fc capture (AHC) biosensors. Positive clones were then expressed for further cell surface EC50, cross-reactivity, epitope binning, ligand blockade, and stability assays.
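As background on the cell-surface EC50 assays mentioned above: EC50 values are conventionally extracted by fitting a four-parameter logistic (Hill) curve to dose-response data. The sketch below illustrates such a fit on simulated data; the parameter values and the fitting routine are generic assumptions, not the analysis pipeline used in this study.

```python
# Minimal sketch: estimating an EC50 by fitting a four-parameter logistic
# (Hill) curve to dose-response data. All numbers are simulated, not data
# from this study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Simulated dose-response data (nM antibody vs. % response)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = four_pl(conc, 2.0, 98.0, 0.9, 1.2) + np.random.default_rng(0).normal(0, 2, conc.size)

params, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 1.0, 1.0])
print(f"Estimated EC50: {params[2]:.2f} nM")
```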
To generate the 1B7/CD3 BsAb, 1B7 heavy and light chain gene regions were cloned into hIgG1 or hKappa pcDNA3.1+ expression vectors, respectively. Xencor CD3 antibody and heterodimeric Fc technology were applied for the BsAb. 1B7/CD3 was expressed in ExpiCHO cells through transient transfection and then purified using Protein-A and cation affinity columns (AKTA FPLC system). The binding of 1B7/CD3 to the TSLPR and CD3 antigens was also tested by BLI. Cell lines and primary B-ALL samples B-ALL cell lines MHH-CALL4 (DSMZ, Germany) and REH (ATCC, Manassas, VA) were used in the study. REH-TSLPR-Luc [16] was created by sequential lentiviral transduction with a human CRLF2 plasmid (GeneCopoeia, Rockville, MD) and a firefly luciferase plasmid (Addgene, Watertown, MA), followed by FACS sorting. Primary B-ALL cells were obtained from MD Anderson Cancer Center (MDACC) with informed consent, and BOS-1 patient-derived xenograft (PDX) models harboring the IGH::CRLF2 rearrangement were obtained from Dr. David Weinstock [21] at Dana-Farber Cancer Institute. Peripheral blood mononuclear cells (PBMC) from human volunteers were isolated from Buffy Coats (Gulf Coast Blood Bank, Houston, TX) using a Ficoll-Paque density gradient (GE Healthcare, Chicago, IL), and the protocol was approved by the institutional review board. Human bone marrow was purchased from Lonza (Basel, Switzerland). Cynomolgus bone marrow was purchased from Humancells Bioscience (Milpitas, CA). TSLPR expression and functional assays Primary B-ALL samples were stained with anti-CD45, anti-CD19, and anti-TSLPR (BioLegend, San Diego, CA). For functional assays, GFP+ REH-TSLPR cells were incubated with 1B7/CD3 or a control BsAb before addition of CD8+ T cells. The levels of the activation marker CD69 were shown on CD3+CD8+GFP− cells. All sample acquisition was performed using fluorescence-activated cell sorting (FACS) and analyzed using FlowJo v10.5 (Ashland, OR). For the killing assays, REH-TSLPR cells, MHH-CALL4 cells, and Alexa Fluor 750-tagged T cells or bone marrow cells were incubated with different concentrations of 1B7/CD3 or a control BsAb before effector T cells were added. Cell viability was determined by FACS. Animal models Animal, dosing, and tumor measurement. All animal experiments conformed to the relevant regulatory standards and were approved by the Institutional Animal Care and Use Committee at MDACC. Sample size selection was based on literature [22]. NOD.Cg-Prkdc scid Il2rg tm1Wjl/SzJ (NSG) mice (The Jackson Laboratory) were intravenously (i.v.) injected with REH-TSLPR-Luc cells or Bos-1 PDX cells. Once bioluminescence from REH reached around 10^7 p/sec or the BOS-1 fraction reached around 20% in blood, 10 × 10^6 PBMC per mouse were injected i.v. for humanization, followed by stratified randomization with 5 mice in each group and treatment with 1B7/CD3 or vehicle via intraperitoneal (i.p.) injection weekly for three weeks. Leukemia burden was assessed by imaging or FACS. Pharmacokinetic (PK) analysis of 1B7/CD3 in mice Single-dose PK was studied in naive and PBMC-humanized NSG mice. For PK analysis in NSG mice, animals were randomized into treatment groups and administered 0.3 or 1 mg/kg 1B7/CD3 by i.p. injection. For PK analysis in humanized NSG mice, 1 × 10^7 PBMC were administered by i.v. injection 10 days prior to randomization for dosing. Blood samples were collected at the indicated time points and processed to plasma for bioanalytical and PK analyses.
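For readers unfamiliar with the downstream PK calculations, a terminal half-life is commonly estimated from the log-linear decline of plasma concentration, assuming first-order elimination (half-life = ln 2 / k). The following minimal sketch uses invented concentration-time data chosen only to be consistent with the half-life range reported later; it is not the study's bioanalytical workflow.

```python
# Minimal sketch: estimating an elimination half-life from plasma
# concentration-time data, assuming first-order (log-linear) elimination.
# The data points are invented for illustration only.
import numpy as np

time_days = np.array([1, 3, 7, 10, 14, 18])                  # sampling times
conc_ng_ml = np.array([9000, 7800, 5600, 4500, 3300, 2400])  # plasma conc.

# Fit ln(C) = ln(C0) - k * t by least squares; the slope gives the rate constant.
slope, intercept = np.polyfit(time_days, np.log(conc_ng_ml), 1)
k_elim = -slope
half_life = np.log(2) / k_elim

print(f"Elimination rate constant: {k_elim:.3f} 1/day")
print(f"Terminal half-life: {half_life:.1f} days")  # ~9 days for these data
```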
Toxicity of 1B7/CD3 in cynomolgus monkeys General toxicity. In the exploratory dose-finding toxicity study, three adult, female cynomolgus monkeys were randomized into treatment groups and received 0.3, 1, or 3 mg/kg 1B7/CD3 via i.v. bolus on Day 1 and Day 8. Monkeys were monitored closely for moribundity and mortality. Whole blood was processed for clinical pathology at the indicated time points. Organs/tissues were harvested for gross findings and histopathology (Supplemental Table 3). Blood was collected at the indicated time points to evaluate 1B7/CD3 exposure, T cell activation, and cytokine release. Briefly, blood was processed into plasma for bioanalytical analysis. Concentrations of 1B7/CD3 in plasma were measured as described in the PK assay. Cytokine release was analyzed with a Multiplex Non-Human Primate Cytokine magnetic bead-based immunoassay (Millipore) using Luminex technology. For T cell activation, blood cells were stained with anti-CD3, CD69, and CD25 antibodies (BioLegend). Data were acquired by FACS and analyzed using FlowJo v10.5. The sources of all antibodies are listed in Supplementary Table 4. Statistical analysis. The investigators were not blinded to the group allocation during the experiment. Statistical differences between drug-treated and control groups were determined using a Student's t-test or one-way analysis of variance (ANOVA) in GraphPad Prism 9. Nonparametric tests were used when t-tests were not applicable. Significant differences were indicated by p < 0.05. Data sharing statement For original data, please contact <EMAIL_ADDRESS>. RESULTS Identification of anti-TSLPR antibodies and 1B7/CD3 bispecific construction To identify anti-TSLPR antibodies, fully humanized TSLPR-specific monoclonal antibodies were first generated from immunized H2L2 Harbour mice using SBCC technology (Supplementary Fig. 1A-C), and a total of seventy clones with unique sequences were discovered with variable affinities (Supplementary Fig. 1D-H). Clone 1B7, with an affinity of 2.82 nM (Fig. 1A), was selected as the final candidate based on cynomolgus cross-reactivity (Fig. 1A) and developmental properties. To generate bispecific 1B7/CD3 constructs, the 1B7 heavy and light chains were cloned into hIgG1 or hKappa expression vectors, respectively (Fig. 1B). Xencor's CD3 BsAb technology [23] was applied to create a BsAb with an Fc-silent region and an affinity of 0.88 nM to TSLPR (Fig. 1C). The BsAb was found to be stable without any apparent aggregation or degradation (Fig. 1D). Tandem binding of CD3 and TSLPR by the BsAb was then confirmed by BLI (Fig. 1E). 1B7/CD3 activates T cells and triggers antigen-specific tumor lysis We first characterized antigen levels by assessing TSLPR expression in primary Ph-like B-ALL patient specimens harboring CRLF2 rearrangements. Our data revealed comparably high expression of TSLPR and CD19 across patients (Fig. 2A), which strongly supported targeting TSLPR as a novel approach for B-ALL therapy. The ability of 1B7/CD3 to activate T cells and to kill B-ALL cells in vitro was then investigated. Activation of CD8 T cells, as demonstrated by the strong upregulation of CD69 when exposed to REH-TSLPR cells (Fig. 2B), was induced by 1B7/CD3 but not by the control antibody, thus indicating that CD8 T cell activation was antigen-specific. Interestingly, in a similar experiment that used normal PBMC as the target, 1B7/CD3 only induced minimal activation of CD8 T cells, suggesting that TSLPR levels in normal PBMC were too low to trigger a strong T cell response (Fig.
2C). Additionally, incubation with 1B7/CD3 was shown to induce the activation of Jurkat T cells equipped with an NFAT luciferase reporter when co-cultured with REH-TSLPR cells expressing escalating levels of TSLPR, with MHH-CALL4 ALL cells, and with primary ALL blasts (Fig. 2D-E). Furthermore, 1B7/CD3 treatment was selectively efficacious at mediating the killing of REH-TSLPR and MHH-CALL4 cells, but not of normal T cells or BM cells (Fig. 2F, Supplementary Fig. 2A-C). Overall, our data suggest that 1B7/CD3 induced antigen-specific activation of T cells leading to the elimination of ALL cells in vitro. 1B7/CD3 inhibits tumor growth in xenograft models The in vivo anti-leukemia activity of 1B7/CD3 was first tested in the REH-TSLPR-Luc ALL xenograft model. Tumor growth was significantly inhibited in humanized mice treated with 1B7/CD3 when compared to those treated with vehicle (Fig. 3A-B). To confirm the anti-tumor efficacy of 1B7/CD3 in a different donor and assess for dose-dependency, we administered escalating doses of 1B7/CD3 to mice humanized with donor 879. Our findings show that 1B7/CD3 treatment inhibited tumor growth in a dose-dependent manner, with the 1 mg/kg dose inducing the strongest tumor growth inhibition, 0.1 mg/kg inducing moderate growth inhibition, and <0.1 mg/kg inducing no growth inhibition (Fig. 3C). Importantly, nearly complete leukemia clearance in the BM was observed after 1 mg/kg 1B7/CD3 treatment, resulting in undetectable ALL in 3/5 mice and exceptionally low levels of residual ALL (0.01% and 0.05% tumor among live cells) in the other two animals (Fig. 3D). To monitor changes in T cell dynamics, activation, and phenotype, peripheral blood was collected from mice at the indicated time points. Consistent with the observed dose-dependent effect on tumor growth, the percentage of CD3+ T cells significantly increased in mice treated with 1 mg/kg 1B7/CD3, but not in other groups, when compared to those treated with vehicle (Fig. 3E). Additionally, 1 mg/kg 1B7/CD3 induced a T cell phenotype shift from effector memory T cells (TEM) to effector memory T cells re-expressing CD45RA (TEMRA), as well as an increase in the stem cell memory T cell (TSCM) population at week 2 and week 3 of treatment when compared to levels among vehicle-treated mice (Fig. 3F). Since PDX models have been shown to better recapitulate patient responses, we assessed the anti-ALL activity of 1B7/CD3 treatment in Bos-1 B-ALL xenograft models by FACS analysis. To address the variation in efficacy caused by effector cells from different donors, PBMCs from two donors, 076 and 875, were used for the assessment of anti-tumor efficacy. Overall, 1B7/CD3 treatment induced tumor regression in a dose-dependent manner in the blood, spleen, and BM of both humanized models (Fig. 4A-B). However, and consistent with findings in CDX models, anti-tumor efficacy varied between donors (Fig. 4A-B), with tumor regression observed after 1 and 0.1 mg/kg 1B7/CD3 treatment in the 875 humanized model, but only after 1 mg/kg 1B7/CD3 treatment in the 076 humanized model (Fig. 4A-B). Treatment-induced effects on T cell dynamics and phenotype were then assessed. When compared to the vehicle-treated group, administration of 1 mg/kg, but not 0.1 mg/kg, 1B7/CD3 increased CD69 expression in T cells (Fig. 4C) as well as induced a cell phenotype shift from TEM to TEMRA and increased TSCM populations in both blood and spleen from 076 humanized mice (Fig.
4D). Consistently, a dose- and time-dependent increase in T cell concentration was observed in the blood, spleen, and BM of 076 humanized mice treated with escalating doses of 1B7/CD3 when compared to those treated with vehicle, with the strongest increases induced by 1 mg/kg 1B7/CD3 treatment (Fig. 4E). By contrast, and in agreement with previous data (Fig. 4A-B), both the 1 and 0.1 mg/kg 1B7/CD3 doses induced strong but similar increases in T cells over time in the 875 humanized model when compared to that induced by vehicle (Supplementary Fig. 3A-C). Follow-up IHC analyses confirmed 1B7/CD3 treatment-induced increases in T cells in the BM and spleen of both 076 (Fig. 4F) and 875 humanized mice (Supplementary Fig. 3D). Most importantly, a marked decrease in leukemia cells and, correspondingly, a restoration of the normal differentiated mouse myeloid lineage, was observed by H&E staining in the BM of 076 humanized models treated with either 1 or 0.1 mg/kg 1B7/CD3 when compared to that in vehicle-treated groups (Fig. 4G). Durable PK in mice Pharmacokinetic analyses of NSG mice treated with 0.3 and 1.0 mg/kg 1B7/CD3 revealed a durable PK profile in serum and an approximate half-life of 9-10 days (Fig. 5A; Supplementary Table 1). This approximate half-life is similar to that of a normal IgG antibody and markedly longer than the reported half-life of blinatumomab [24] (Fig. 5A; Supplementary Table 1). The potential for target-mediated drug disposition (TMDD) arising from the binding of the CD3 arm of 1B7/CD3 to human T cells was then assessed with a second PK study using serum harvested from PBMC-humanized NSG mice. Compared to findings in NSG mice, 1B7/CD3 demonstrated a relatively shorter half-life of 4-5 days and lower exposure. Fig. 2 1B7/CD3 bispecific antibody (BsAb) induced strong antigen-specific T cell activation and tumor cell killing in vitro. A Comparable expression of TSLPR and CD19 in primary B-cell acute lymphocytic leukemia (B-ALL) samples. TSLPR and CD19 expression were determined on gated live CD45/TSLPR or CD45/CD19 double-positive cells, respectively. B Activation of CD8+ T cells by 1B7/CD3 BsAb. REH-TSLPR cells (GFP+) were incubated with 1B7/CD3 BsAb or a control (inactive) BsAb before addition of human primary CD8+ T cells, followed by culturing for 48 h. Levels of the activation marker CD69 are shown on CD3+CD8+GFP− cells. C 1B7/CD3 BsAb shows low levels of activation with human PBMC. 1B7/CD3 or inactive control BsAb (10 µg/ml) was added to peripheral blood mononuclear cells (PBMC) from a normal donor. Levels of CD69 expression, as an indication of CD8+ T cell activation, were measured. D, E Expression of the NFAT luciferase reporter was triggered by 1B7/CD3 BsAb, but not the control antibody, in a reporter assay. D Low, medium, and high TSLPR expression (~1000, 2500, and 30,000 receptors/cell, respectively) in REH cells, and expression of TSLPR in MHH-CALL4 ALL cells (native expressers of TSLPR, ~4000 copies); E cells from a primary patient sample (~6000 copies/cell). Cells were coated with 1B7/CD3 BsAb or control antibody before the addition of Jurkat cells expressing the NFAT luciferase reporter. The NFAT expression signal was measured 24 h post-activation and is expressed in relative light units (RLU). F REH-TSLPR and MHH-CALL4 ALL cells were incubated with different concentrations of 1B7/CD3 BsAb or control antibody before adding human T cells at a 5:1 (E:T) ratio. Flow cytometry was used to determine cell viability after 48 h of treatment.
Tolerable safety in cynomolgus monkeys To determine whether cynomolgus monkeys were the most appropriate models for predicting the toxicity of 1B7/CD3 treatment in patients, we first demonstrated that the binding of anti-TSLPR antibodies and of the 1B7/CD3 BsAb to cynomolgus monkey PBMCs was similar to their binding to human PBMCs. Anti-TSLPR antibodies showed negligible or no binding to T cells (Supplementary Fig. 4A) or to hCD14-, hCD16-, hCD20-, and hCD56-positive populations (data not shown). Consistent with the literature [25], there was detectable expression of TSLPR in DCs derived from human monocytes, but not in B cells from cynomolgus monkey BM (Supplementary Fig. 4B, C). In contrast, 1B7/CD3 bound to T cells gated as CD3+ PBMC, thus indicating that the CD3 arm of the BsAb binds properly to both human and cynomolgus monkey T cells (Supplementary Fig. 4A). An exploratory toxicity study was then conducted to evaluate the safety of 1B7/CD3 in vivo. Initially, doses of 3, 1, or 0.3 mg/kg were evaluated in cynomolgus monkeys, but the highest dose caused progressive lethargy and resulted in euthanasia 4 h after the first dose. Doses of 1 and 0.3 mg/kg resulted in appropriate antibody exposure, with the second dose of 1 or 0.3 mg/kg reaching a Cmax similar to or slightly lower than its respective initial dose. Additionally, the antibody concentration declined by half within 24 h post-dosing. Notably, the plasma concentration of 1B7/CD3 after an initial dose of 0.3 mg/kg was found to be 536 ng/ml on the day before the second dosing, which is still well above the picomolar-range EC50 observed in vitro (Fig. 6A). Importantly, the 1 mg/kg dose was tolerated with mild and transient treatment-related adverse events (AE), and the 0.3 mg/kg dose was well-tolerated with minor treatment-related AEs. Specifically, for both the 1 and 0.3 mg/kg dosage groups, white blood cell, monocyte, and lymphocyte counts briefly declined after the first dosing and gradually rebounded once the dosing was stopped (Fig. 6B-D). Follow-up histopathological analysis found no lesions that could be specifically attributed to 1B7/CD3 treatment in cynomolgus monkeys. Together, these data suggest that AEs associated with 1B7/CD3 treatment are dose-dependent. Consistent with the hematology data, we observed a steep decrease in T cells at 24 h post first dose and, to a lesser extent, at 24 h post second dose, followed by a modest rebound of T cell quantity at 7 days after dosing in cynomolgus monkeys treated with 1B7/CD3 (Fig. 6E). Correspondingly, the CD69 percentage in T cells increased significantly after the first dose and remained at a steady state without further rise after the second dose of 1 or 0.3 mg/kg 1B7/CD3 (Fig. 6F). Of note, a rapid decline in T cell percentage and a rapid increase in the CD69 percentage in T cells were observed in the cynomolgus monkey treated with 3 mg/kg (Fig. 6E-F). Additionally, a sharp and transient increase in the expression of the T cell activation marker CD25 was observed 24 h after the first and second doses of 1 mg/kg 1B7/CD3, with levels returning to baseline at 7 days post-dosing (Fig. 6G). A moderate increase was observed 24 h after the first dose of 1B7/CD3 at 0.3 mg/kg, but not after the second dose (Fig.
6G). Lastly, and in line with clinical observations, serum samples from cynomolgus monkeys that received 3, 1, or 0.3 mg/kg 1B7/CD3 exhibited transient increases of IFNγ, IL-6, IL-8, IL-10, monocyte chemoattractant protein 1 (MCP-1), and TNFα cytokines at 4 h post initial dose and a return to basal levels at either 24 h or 7 days post-dosing. No notable induction of cytokine levels was observed after the second dosing, except for IL-8 (Fig. 6H). DISCUSSION To date, more than 100 different BsAb formats exist, with new constructs constantly emerging due to efforts to extend the half-life and optimize the balance between anti-tumor activity and drug safety [26][27][28]. Here, we present 1B7/CD3, a novel anti-TSLPR BsAb composed of the anti-TSLPR 1B7 Fab and Xencor's anti-CD3 scFv. In vitro analysis demonstrated proper binding of 1B7/CD3 to both human TSLPR and CD3, along with high purity and feasible antibody production. Interestingly, while TSLPR is primarily expressed on monocytes and dendritic cells, as well as occasionally on lymphocytes [29,30], 1B7/CD3 induced only minimal activation of normal PBMC but strong antigen-specific T cell activation and potent tumor cell killing. Follow-up in vivo assessments found that 1B7/CD3 treatment markedly inhibited tumor growth in distinct CDX and PDX models across donors at doses that were tolerable to cynomolgus monkeys. More PBMC donors will be included in future studies to further address the donor-to-donor variation. The remarkable in vivo anti-ALL activity of 1B7/CD3 was associated with T cell expansion in the spleens and BM. We also showed that 1B7/CD3 induced a phenotype shift from TEM not only to TEMRA but also to TSCM cells, which have the ability to self-renew and proliferate. The presence of mouse polymorphonuclear cells and megakaryocytes was observed in the BM from 1B7/CD3-treated mice by H&E analysis, which indicates that 1B7/CD3 can selectively kill leukemia cells while sparing normal hematopoietic cells. Together, our data demonstrate that the exceptional tumor growth inhibition of 1B7/CD3 therapy was likely mediated by T cell activation and expansion.
The 1B7/CD3 BsAb has been designed to address the short half-life of blinatumomab, which is due to its lack of an Fc domain. Specifically, our 1B7/CD3 BsAb includes an Fc domain to facilitate purification, enhance stability, and prolong the half-life through FcRn binding [31]. Consequently, 1B7/CD3 maintains full-length antibody properties and a half-life of about 9-10 days in NSG mice, similar to that of a typical monoclonal antibody in humans [32]. Follow-up efforts focused on assessing the possibility of TMDD in patients treated with 1B7/CD3 by using PBMC-humanized NSG mice. A target sink effect has been reported for daratumumab, a CD38 monoclonal antibody, due to the abundant expression of CD38 in myeloma [33,34]. Given that BsAbs bind two targets simultaneously, the TMDD effect may be exacerbated once both targets are present. Hence, the PK profile of 1B7/CD3 in NSG mice may not reflect the PK profile in patients, since 1B7 does not cross-react with the mouse antigen and NSG mice lack T cells. As expected, assessment of the PK profile of 1B7/CD3 in humanized NSG mice revealed a reduction of Cmax and half-life. Further, the concentration of 1B7/CD3 in the distribution phase was profoundly reduced in humanized mice when compared to that in wild-type mice, possibly caused by the CD3-mediated antibody sink. Consistently, this profound decline in 1B7/CD3 exposure was also observed in the cynomolgus monkey at 24 h post-dose. However, it is important to note that the half-life of 1B7/CD3 is still significantly longer than that of blinatumomab, and the serum exposure of 1B7/CD3 at 18 days post-dosing is higher than the EC50 in the in vitro killing assay. Fig. 3 In vivo activity of 1B7/CD3 bispecific antibody (BsAb) in the REH-TSLPR cell line-derived xenograft model. A Schematic of experimental design. Female NSG mice at 6-10 weeks old received a tail vein injection of 0.1 million REH-TSLPR human acute lymphoblastic leukemia (ALL) cells per mouse. Two weeks after tumor inoculation, 10 million PBMC per mouse were intravenously injected for humanization. Three days after the PBMC injection, mice were randomized into control and treatment groups. Mice were treated with either vehicle control or 1B7/CD3 at different doses at the indicated time points. Tumor burden was assessed by imaging using IVIS (PerkinElmer). B Imaging (top) and radiance plot (bottom) monitoring tumor growth in donor 296 humanized mice (Donor PBMC-296) treated with control (PBS) or 1B7/CD3 (1 mg/kg) at the indicated time points. Data are shown as mean ± SD. C Imaging (top) and radiance plot (bottom) monitoring tumor growth in donor 879 humanized mice (Donor PBMC-879) treated with control (PBS) or 1B7/CD3 at doses of 1, 0.1, 0.05, and 0.01 mg/kg at the indicated time points. Data are shown as mean ± SD. D Tumor burden in the bone marrow (BM) from donor 879 humanized mice. BM was harvested 24 h after the last dosing. Data are shown as mean ± SD; *p < 0.05 compared with control. E Percentage of CD3+ T cells in the blood from donor 879 humanized mice collected at 24 h post second dosing. Data were analyzed by GraphPad Prism 9 and shown as mean ± SD; **p < 0.01 compared with control. F Dynamic shifting of the T cell phenotype as donor 879 humanized mice receive repeated dosing of 1B7/CD3 treatment or control. Blood was collected at week two and week three post-treatment. TSCM, stem cell memory T cells (CD45RO−CD62L+); TCM, central memory T cells (CD45RO+CD62L+); TEM, effector memory T cells (CD45RO+CD62L−); TEMRA, effector memory T cells re-expressing CD45RA (CD45RA+CD62L−). Dose-limiting
toxicities have been observed in the majority of CD3-redirecting BsAbs, including blinatumomab, in both patients and preclinical models, with the most common AEs associated with cytokine release syndrome (CRS) and neurotoxicity [35,36]. The cynomolgus monkey is an appropriate model for assessing 1B7/CD3 toxicity, as 1B7/CD3 binds to human and cynomolgus TSLPR and CD3 with comparable affinities. In cynomolgus monkeys, we found that a high dose (3 mg/kg) of 1B7/CD3 induced AEs such as progressive vomiting and lethargy. However, medium (1 mg/kg) and low (0.3 mg/kg) doses of 1B7/CD3 were well-tolerated with only mild and transitory BsAb-related toxicity. Importantly, there was appropriate antibody exposure over the period of toxicity assessment. Administration of 1B7/CD3 induced a dose-dependent depletion of lymphocytes and monocytes at 4 h after the first dose, likely caused by activation-induced T cell death [37], with a full or partial recovery at 24 h or 7 days after the first dose. We did not observe a reduction in leukocytes after the second dose in cynomolgus monkeys, which is consistent with reports from other BsAbs. Interestingly, we observed a marked elevation of cytokines after the first dosing, but not after the second dose, with the exception of IL-8. A transient but strong elevation of pleiotropic cytokines within minutes to hours after an infusion of CD3-redirecting BsAbs has been previously observed, and shown to be likely mediated by helper T cells and macrophages [38,39]. Further, a strong elevation of cytokines, primarily TNF-α, interferon (IFN)-γ, IL-1β, IL-2, IL-6, IL-8, MCP-1, and IL-10, has been implicated in the pathogenesis of CRS [40,41]. Together, our assessment of 1B7/CD3 toxicity in cynomolgus monkeys strongly indicated that the maximum tolerated dose of our CD3-redirecting BsAb is 1 mg/kg. However, more animals will be needed in a future GLP toxicity study. Collectively, our preclinical studies demonstrate potent anti-B-ALL activity of 1B7/CD3 in vitro and in vivo at doses that are tolerable in cynomolgus monkeys. These findings provide proof of concept for evaluating a TSLPR-targeting BsAb as a potential therapy for improving the outcome of patients with CRLF2/TSLPR-overexpressing Ph-like B-ALL. Fig. 1 Development and characteristics of the 1B7/CD3 bispecific antibody (BsAb). A Octet and cell surface binding of the top clone 1B7 with TSLPR-His (lab-prepared, monomer structure confirmed), human TSLPR (huTSLPR), and cynomolgus monkey TSLPR (cynoTSLPR). B Design of both TSLPR-VH-Hc mutant and anti-CD3e-VH-Hc mutant (scFv) vectors in pcDNA3.1 for bispecific monoclonal antibody expression. C Left, cartoon of the 1B7/CD3 BsAb with a non-functional Fc region; middle and right panels, 1B7/CD3 BsAb was tested for binding to either antigen, TSLPR or CD3e. D The stability of 1B7/CD3 BsAb was tested by ultra-high performance liquid chromatography-size exclusion chromatography (UHPLC-SEC) at the indicated time points and temperatures. E Tandem binding of 1B7/CD3 BsAb to both TSLPR and CD3e antigens was assayed by Bio-Layer Interferometry. 1B7 alone served as the positive control for TSLPR binding and the negative control for CD3e binding. A BSA-loaded sensor probed with TSLPR and CD3e served as the negative control for both antigens. Fig. 4 Anti-tumor activity of 1B7/CD3 treatment in the Bos-1 ALL patient-derived xenograft (PDX) model across donors. Three million Bos-1 (CD19+TSLPR+) ALL PDX cells were i.v.
injected into NSG mice, and Bos-1 engraftment was assessed by analyzing the percentage of ALL cells (hCD45+) in peripheral blood with flow cytometry. Humanization and treatment were the same as in Fig. 3A. Leukemia burden was assessed as the percentage of hCD45+/TSLPR+ cells in the single live cell population after humanization. A TSLPR antibody (1B4, BioLegend) that does not compete with 1B7/CD3 was used for staining. A, B Tumor burden in the blood, spleen, and bone marrow (BM) in (A) donor 076 (PBMC 076) and (B) donor 875 (PBMC 875) humanized mice treated with 0.1 or 1 mg/kg 1B7/CD3 or control (PBS). Spleen and BM were harvested 3 days post last dosing. Data are shown as mean ± SD; ***p < 0.001, **p < 0.01, *p < 0.05 compared with control. C Percentage of CD69+ cells among CD3+ T cells in the blood of donor 076 humanized mice (PBMC 076) treated with 0.1 or 1 mg/kg 1B7/CD3 or control (PBS) at week 2 post-dosing. Data are shown as mean ± SD; **p < 0.01 compared with control. D Dynamic shifting of the T cell phenotype in the blood and spleen in donor 076 humanized mice (PBMC-076) treated with 1 mg/kg 1B7/CD3 or control (PBS). TSCM, stem cell memory T cells; TCM, central memory T cells; TEM, effector memory T cells; TEMRA, effector memory T cells re-expressing CD45RA. E Percentage of CD3+ T cells in the blood, spleen, and BM in donor 076 humanized mice (PBMC-076) treated with 0.1 or 1 mg/kg 1B7/CD3 or control (PBS). Blood samples were collected at the indicated times; spleen and BM samples were collected 3 days post last dosing. Data are shown as mean ± SD; ***p < 0.001 compared with control. F Immunohistochemistry (IHC) analysis of CD3+ T cells in the spleen and BM in donor 076 humanized mice treated with 0.1 or 1 mg/kg 1B7/CD3 or control (PBS) at 3 days post last dose. G HLA-A IHC analysis (upper panel) and hematoxylin & eosin staining in BM (lower panel) in donor 076 humanized mice at 3 days post last dose. Yellow circles indicate mouse polymorphonuclear cells; red arrows indicate mouse megakaryocytes.
7,147
2023-08-26T00:00:00.000
[ "Biology", "Medicine" ]
Metacognitive knowledge and regulation of peer coaches Peer coaches are undergraduate peer educators who help facilitate learning in introductory STEM classes, either as learning assistants or peer-led team learning leaders. Peer coaches' facilitation is generally focused on specific content knowledge, but their pedagogical skills could be applied to other content, such as metacognition. Metacognition, an individual's awareness and management of their own thinking and reasoning, is an important skill for undergraduate students to learn, though these practices rarely receive the explicit focus required for their development. Peer coaches could act as facilitators of metacognitive practices with their introductory STEM students. As a first step to investigating this potential role, we collected and analyzed written artifacts from the peer coaches' pedagogical training course, looking for evidence of metacognitive competence. We found that coaches had competence in metacognition both as learners and as coaches, and that these two perspectives informed each other in productive ways. I. INTRODUCTION The ever-present call for better learning environments and educational gains for undergraduate STEM students has led to the development of different types of undergraduate-facilitated peer learning. Peer coach facilitators are advanced undergraduates who support active learning in introductory courses. There are several kinds of peer coaches, including learning assistants (LAs), who facilitate learning during class time alongside professors [1], and peer-led team learning (PLTL) leaders, who facilitate learning in small-group sessions outside of normal class hours [2]. These two models have similar goals and outcomes. Peer coaches improve student learning outcomes and interest in their specific topics [1][2][3][4]. Other benefits of peer coaching include stronger scientific subject identity in their students and greater retention in science and education majors [1][2][3][4][5][6]. Peer coaches receive training from a semester-long pedagogy course and weekly preparation meetings with faculty members and senior leaders, ensuring they are prepared to support their students' learning, problem-solving, and group work. Learning about metacognition is a recommended part of peer coach pedagogical training [7,8]. Metacognition describes knowledge and awareness of one's own cognitive processes [9], a definition that has expanded in recent literature to also include knowledge of specific problem-solving practices and the ability to discern their proper use [10]. Implementing metacognitive practices has been shown to help students by developing higher-order thinking skills, scaffolding self-regulation, and bolstering motivation for future learning [11,12], which improves final grades for students [13]. Despite the learning benefits of metacognition, and in contrast to the peer coach pedagogy class, explicit instruction in its practices in STEM classroom settings is rare [14,15]. In the effort to improve student metacognitive practice, we investigated whether instructors could productively enlist the help of peer coaches. That is, can peer coaches be effective metacognitive coaches?
Previous studies have investigated similar questions. Lutz and Rios [16] explored LAs' epistemological growth toward seeing knowledge as co-constructed, which is a foundation for productive metacognitive practices. Another study investigated student metacognitive outcomes when peer tutors were enlisted to support metacognitive growth [17]. Our work focuses on the peer coaches' preparedness to act as metacognitive coaches, which has not yet been explored. II. THEORETICAL FRAMEWORK The framework of STEM teacher knowledge guides our thinking about what it would mean for peer coaches to be metacognitive coaches. This framework posits three facets of teaching knowledge: pedagogical knowledge, content knowledge, and pedagogical content knowledge (PCK) [18,19]. The latter is an instructor's specialized knowledge of both content and educational strategies that work best for teaching their specific subject. Previous studies on LAs [20] and PLTL leaders [21] show evidence that peer coaches develop general pedagogical knowledge as part of their facilitation experience. This study focuses on peer coaches' content knowledge, where the specific content is metacognition. Peer coaches' PCK about metacognition will be left to future work. Content competence related to metacognition is conceptualized as composed of two subcategories: metacognitive knowledge and metacognitive regulation [10]. Metacognitive knowledge consists of 1) declarative knowledge about the concrete concepts and practices one knows, 2) conditional knowledge about when and why specific practices should be used, and 3) procedural knowledge about how to implement and manage these practices [10]. Metacognitive regulation consists of 1) planning done prior to cognition in the selection of practices, 2) monitoring one's awareness and comprehension during a cognitive task's completion, and 3) evaluating one's processes and products after cognition for potential improvements [10]. Peer coaches can engage in all of these facets in their roles as learners and coaches. We define metacognitive competence as having both metacognitive knowledge and metacognitive regulation. Our research questions for this preliminary study are: 1) What metacognitive knowledge and regulation do peer coaches exhibit in their role as learners? 2) What metacognitive knowledge and regulation do peer coaches exhibit in their role as coaches? and 3) How does metacognitive competence in one role inform competence in the other role? III. METHODOLOGY AND DATA COLLECTION This study took place at the University of New Hampshire, where the peer coach program serves the introductory STEM courses, with LAs working in physics and mathematics and PLTL leaders working in biology, chemistry, and neuroscience. For their first semester in the program, all peer coaches take the same one-credit pedagogical training course, which focuses on the pedagogical strategies and metacognition they will practice in their new role. The first author, a graduate student at the institution, helped modify and deploy the course and associated materials. The second author is one of the two professors who have taught this course over the last 15 years.
In the course there were two class meetings focused on metacognition. The first set of readings [23][24][25] describes in detail specific implementations of metacognitive practices. Students read one of these papers and then shared their insights through a jigsaw [26] activity with their peers. The second week's reading was a compilation of cognitive science research on ten easily implemented metacognitive strategies [22]. In class, the peer coaches discussed each strategy, including when and why it should work. We collected all written assignments from the pedagogy course from those peer coaches who consented to participate in the research, for a total of four semesters beginning in Fall 2021. We focused our analysis on those artifacts that were tied to the concepts of metacognitive practices. Table I provides a description of these artifacts and the associated prompts given to peer coaches to generate them. For this study, we selected two peer coaches to serve as case studies, so that we could look in detail at individual coaches using all the data in Table I. Both coaches attended the pedagogical training course during the Fall 2022 semester and were seen as typical of the population because they showed metacognitive competence both as learners and coaches, with some overlap of these two roles. The two coaches we selected for further analysis will be referred to as Ava, an LA, and Stephanie, a PLTL leader. Our initial analysis, which was conducted in the fall of 2022, focused on the synthesis papers written by the peer coaches during the prior semesters, as these were generally the most detailed and thoughtful writings. We began with a set of a priori codes from the literature about the metacognitive competence of learners (declarative, conditional, and procedural knowledge; planning, monitoring, and evaluative regulation). As we coded collaboratively, we developed inductive codes that captured themes not present in the a priori codes. We found that the coaches were frequently being metacognitive in their role as coaches in ways that were distinctly different from metacognition as a learner, so we created specific codes for these roles. Furthermore, coaches sometimes reflected on how regulation in one role informed their actions in the other role: we created a code for this named "interplay". Lastly, we realized that the initial categories were too fine-grained for our needs, and focused simply on the broader categories of metacognitive knowledge and regulation. For our analysis, we applied the a priori and inductive codes used in the initial analysis to the written artifacts collected from Ava and Stephanie, with a focus on finding instances of metacognitive competence. IV. ANALYSIS Through our analysis, we categorized instances of peer coaches' metacognitive competence around five key features. These are metacognitive knowledge and metacognitive regulation, each from both the perspective of a learner and the perspective of a coach, along with the interplay between these two perspectives. A.
Learner Metacognitive Knowledge We define learner metacognitive knowledge as a peer coach's knowledge of learning strategies that utilize higher-order thinking skills, including how to use these strategies themselves, when to use them optimally in their learning, and why they are effective for their learning. Peer coaches' learner metacognitive knowledge can come from their own experience as learners and their pedagogical training. We have seen evidence of this throughout the written artifacts collected, with both Ava and Stephanie making mention of multiple study strategies they utilize when studying for their own classes:
• Group Work
• Practice Testing
• Flashcards
• Vocalization
• Self-Explanation (how is new information related to old information, explaining steps in problem-solving)
• Summarization
• Rereading/rewriting notes
• Chunking (breaking down difficult tasks into smaller tasks)
• Imagery to Text
These strategies are detailed in two of their readings for the pedagogy course [22,25]. The two peer coaches demonstrate their knowledge of many strategies, going significantly beyond the common strategies of re-reading and highlighting [22], suggesting a strong foundation for metacognitive competence. The peer coaches show further learner metacognitive knowledge in their understanding of the use and effectiveness of these study strategies. For example, Stephanie responded to a metacognitive reading [22] with the following: Something new I learned from this reading was the process of learning and how rereading something over and over again is not the correct way of learning, I found this very important because that's how i've been studying for my exams by cramming in the material by rereading lecture slides in which did not help me in the long run. Stephanie's learner metacognitive knowledge allows her to recognize strategies like rereading and cramming, as well as to judge them for her own learning and adjust accordingly. She demonstrates the active process of utilizing content knowledge for metacognitive regulation, which is indicative of metacognitive competence. B. Coach Metacognitive Knowledge We define coach metacognitive knowledge as a peer coach's knowledge of classroom facilitation strategies and pedagogy, which includes what strategies are effective for a specific topic, when students can best utilize them, why such strategies are effective in different contexts, and the peer coach's individual philosophies about facilitation. Both Ava and Stephanie wrote about metacognitive practices they have seen within the context of coaching, including:
• Group Work
• Practice Problems
• Elaborative Interrogation (explaining why something is true)
• Chunking
• Visualization
This list shows the familiarity that peer coaches have with metacognitive practices that occurred during coaching. These discussed strategies were either observed in students' behavior or introduced by the peer coaches. Their discussion of these strategies demonstrates their ability to recognize practices regardless of origin, and therefore demonstrates their coach metacognitive knowledge.
We see additional evidence of coach metacognitive knowledge as our peer coaches discuss the implementation of these known strategies in their coaching sessions. For example, when speaking on the decision to include more visual components in her problems, Stephanie writes: [H]aving something visual to plan out and followed really helped my students who like to have something to look at while solving a problem and if they did encountered something similar on a homework problem or a question on an exam they can visualize back to our activity and remember the steps on how to tackle it. Here, Stephanie shows her coach metacognitive knowledge by drawing a connection between the learning strategies she can facilitate in her sessions and their benefits to her students. Peer coaches demonstrate metacognitive competence through not just knowledge of metacognitive practices, but also awareness of their pedagogical implications during the coaching experience. C. Learner Metacognitive Regulation Learner metacognitive regulation is a peer coach's active thoughts and behaviors relative to learning and performance on cognitive tasks. This includes planning with consideration for familiarity and interest in the topic, monitoring for actions and behaviors that promote or deter learning, and evaluating work in the context of a course and one's goals. Learner metacognitive regulation can be seen when peer coaches reflect on their active learning behaviors over a semester. For example, when Ava talks about her use of practice problems to study for her introductory physics course, she says: Not only was it easier to memorise the formulae and apply them, but also the theory started to make more sense. I consciously understood the best study strategy for me and worked on areas that I found difficult. Here, Ava engages in learner metacognitive regulation by actively monitoring her progress with a new study strategy, then evaluating its effectiveness so that it can be better tuned to her needs as a learner. D. Coach Metacognitive Regulation Our definition of coach metacognitive regulation is a peer coach's active facilitation strategies and behaviors. This includes planning instruction around both content and pedagogy, monitoring their facilitation's effectiveness through students' learning responses, and evaluating the impact of their facilitation strategies based on both student assessment and personal reflection. We found evidence of coach metacognitive regulation in the peer coaches' work with their students, such as when Stephanie responded to a reading assignment focused on cooperative learning: Group effort, there is a saying that a group is as strong as the weakest link. I'm not saying that a person is dragging the group down but they should help a group member who is having a hard time with the topic or subject and from that rewards the student with extra credit and from there reflects their own individual goals and I see that a lot with my own pltl group because when I split them into groups to work with each other they help each other out it one student doesn't understand something they try to help the student understand.
This excerpt demonstrates Stephanie's coach metacognitive regulation: she monitors how her students' learning aligns with both the teaching strategies she implements and her own attitudes towards them. Her regulation of her facilitation happens both during the students' work in her PLTL sessions and before, in her preparations for the session's structure. Stephanie actively and continually takes steps to bolster student learning outcomes, a direct result of her metacognitive competence. E. Interplay of Perspectives While peer coaches have been shown to act as either a learner or a coach within a given situation, their positioning between these two perspectives means that the perspectives can overlap, resulting in beliefs and attitudes unique to both. We refer to this as interplay of perspectives, which we define as the influence that one perspective of a peer coach has on another perspective. We see evidence of this when peer coaches discuss how their implementation of coaching strategies impacted their own learning. For example, when speaking on the benefits of group work, Ava wrote: As I watched my students work together and facilitated their group work, I realized that even the student at the top was able to learn from the experience. By helping their group members, they were able to articulate their thoughts and solidify their own understanding of the material being taught. As a result, not only did I push my students towards working in groups, I tried to implement the strategy into my own study methods. Here, we see Ava reflecting on the benefits provided to her own students by using the learning strategy of group work and deciding to use it herself as a learner. This is more than simply an instance of "practice what you preach," as Ava engages in metacognition to analyze the positive outcomes seen in coaching and understand how she, as a learner, could improve her learning with them. We also see interplay of perspectives in the reverse situation, where peer coaches talk about how their experiences as learners affected their coaching strategies. This can be seen when Stephanie, reflecting on an assigned reading outlining specific metacognitive practices [22], writes: [T]his paper was very useful and helpful in the way I view my own study strategies but also gave me insight in how many different ways my students study for their own exams as well. From this quote, it seems possible that because Stephanie saw the value of the strategies for her own learning, she was more likely to extend the value of those strategies to her students. A peer coach's ability to metacognitively reflect on their own learning can provide great insight into the practices that might benefit their students, enabling a stronger coaching experience. V. CONCLUSIONS AND DISCUSSION This pilot study shows that our two peer coaches demonstrate several aspects of metacognitive competence. This includes metacognitive knowledge and metacognitive regulation, both from the perspective of a learner and of a coach, along with the interplay of these two roles. Evidence of these facets of metacognitive competence was seen in the written artifacts from their pedagogical training course, where they spoke about their experiences in both roles over their first semester.
We have seen similar evidence in other peer coach reflections as well. In future work, we will analyze the remainder of the data set (N=127) to look for further evidence of metacognitive competence. We will also focus on the uniquely powerful interplay of perspectives, and seek evidence that this interplay allows peer coaches to experience increased metacognitive growth compared to an individual holding only one such role. Building from this work and the literature, we will go beyond the current research to construct a fuller operational definition of a metacognitive coach (which will include metacognitive competence and interplay as core components of facilitating metacognition) and analyze the data from the full set of students to investigate to what extent our peer coaches are metacognitive coaches. Other characteristics we will investigate include the peer coaches' ability to be explicit about their metacognitive competence and to tie the formal readings to their own experiences; both of these characteristics indicate deep understanding [27,28]. We have also seen hints of pedagogical content knowledge [19] about teaching metacognition. For example, some peer coaches show an awareness of unproductive but common metacognitive practices, and others show a belief that metacognition should be explicitly taught. The end goal of our work is to inform classroom practices both in the pedagogy course and in the peer coach interactions with students. Future work could investigate which readings and reflection prompts in the pedagogy course are effective for developing productive coach practices and peer coaches' deployment of metacognitive interventions with their students. TABLE I. Description of written artifacts collected from peer coaches' pedagogical training.
• Synthesis paper: 1) A status report on your progress as a facilitative leader. Speak particularly to how you may have changed or evolved in your actions or thinking. In particular, I want you to reach to incorporate ideas from any theories or research on student learning that we read about or discussed. 2) How participating in this course and having this experience has affected your own learning in other courses. This is about YOU as a student, NOT in this pedagogy course.
• Reflection on Metacognitive Readings: Q1: Which reading did you do? Q2: What was something new in this reading for you, and why did you find this interesting or important? Q3: Give details about how some part of this reading connects to your previous beliefs and/or experiences. Q4: Ask an "I wonder" question.
• Self-report of Effective Study Strategies: Q1: What is one study strategy that works well for you and how/why does it help you? Q2: What is one study strategy that hasn't worked for you and how/why didn't it help? Q3: How do you know when you know something? For example, when do you feel confident that you understand a metabolic pathway, or a chemical reaction, or a mathematical proof, or a physics problem solving strategy?
4,526.2
2023-10-15T00:00:00.000
[ "Psychology", "Education", "Physics" ]
Eco-Efficiency Evaluation of Integrated Transportation Hub Using Super-Efficiency EBM Model and Tobit Regressive Analysis - Case Study in China The transportation industry is a key area for ecological civilisation construction and low-carbon development. As the core support of the national integrated transportation system, the ecological development level of the integrated transportation hub (ITH) is crucial for enhancing the sustainable development capacity of the national integrated transportation. An eco-efficiency evaluation index system of ITH is established in this study, and the eco-efficiencies of twenty international ITHs in China are comprehensively evaluated based on the super-efficiency epsilon-based measure (EBM) model. Then the panel Tobit regression model is adopted to analyse the influencing factors of eco-efficiency. The results show that the average eco-efficiency of ITHs in China during 2011-2021 declines first and then rises, with a relatively high level overall but not yet efficient, and there is an obvious gradient distribution characteristic across all eco-efficiencies. Among them, Guangzhou ranks first, followed by Haikou, and Harbin ranks last. It is found that integrated transportation efficiency, urban green coverage, level of opening-up and economic development improve eco-efficiency significantly, while urbanisation rate, industrial structure and technology input have a negative impact. The results are consistent with the actual situation, verifying the practicality of the models, and can be used to promote the sustainable development of integrated transportation. INTRODUCTION Currently, the global climate crisis caused by greenhouse gas emissions is becoming increasingly severe and has attracted widespread attention from the international community. Ecological transportation is a strong support and guarantee for the sustainable development of the economy and society, and also an important field for achieving "dual-carbon" strategic goals. Compared with traditional transportation, ecological transportation places more emphasis on the coordinated development of transportation, nature, society and economy. Therefore, the development of ecological transportation is an inevitable choice for China to respond to climate change and promote harmony between humans and nature within the framework of sustainable development. Over the years, China has been committed to building a green and low-carbon national integrated transportation system in which transportation construction coexists with the social environment. As the core foundation of the national integrated transportation system, the ecological development level of the integrated transportation hub (ITH) directly affects the green development quality of the entire system, which is crucial for enhancing the sustainable development capacity of national integrated transportation. So, what is the current situation and trend of ecological development of China's ITHs? What are the effects of driving factors? These issues are worth exploring in depth. Therefore, timely assessment of the eco-efficiency and characteristics of China's ITHs under resource and environmental constraints, and revealing the factors influencing the eco-efficiency of ITHs, is of great significance for clarifying the problems in the sustainable development process of the current integrated transportation system and alleviating ecological pressure.
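For readers unfamiliar with the second-stage analysis mentioned in the abstract, the Tobit model handles the censored nature of efficiency scores by maximum likelihood. The sketch below is a generic two-limit Tobit on simulated data; the censoring bounds of 0 and 1 and all variable names are illustrative assumptions, not the panel specification estimated in this paper.

```python
# Generic two-limit Tobit regression sketch (censoring at 0 and 1), fitted by
# maximum likelihood. Data are simulated; this illustrates the method, not the
# paper's exact specification.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one factor
beta_true = np.array([0.6, 0.15])
y_latent = X @ beta_true + rng.normal(0, 0.2, n)
y = np.clip(y_latent, 0.0, 1.0)                         # observed, censored scores

def neg_loglik(theta):
    beta, sigma = theta[:-1], np.exp(theta[-1])         # log-sigma keeps sigma > 0
    mu = X @ beta
    ll = np.where(
        y <= 0.0, norm.logcdf((0.0 - mu) / sigma),      # left-censored at 0
        np.where(
            y >= 1.0, norm.logcdf((mu - 1.0) / sigma),  # right-censored at 1
            norm.logpdf((y - mu) / sigma) - np.log(sigma),  # uncensored
        ),
    )
    return -ll.sum()

res = minimize(neg_loglik, x0=np.array([0.0, 0.0, np.log(0.5)]), method="BFGS")
print("Estimated coefficients:", res.x[:-1])
```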
Transportation efficiency is the ratio between effective output and resource input in transportation activities, reflecting the operating status and development potential of the transportation system [1]. The level of transportation efficiency relates not only to whether resources and energy can be efficiently utilised but also to whether the entire transportation system can achieve sustainable development [2]. The study of transportation efficiency began in the 1970s, with researchers initially measuring and analysing the efficiency of urban public transportation [3]. Subsequently, the research scope expanded to other modes of transportation such as railway [4], aviation [5], waterway [6] and highway [7]. In the 1980s, some countries, including the United States, France and the United Kingdom, began to focus on stock optimisation in transportation infrastructure construction, improving the integrated transportation system through the integration-substitution-expansion of single transportation modes [8]. Relevant studies mainly focused on policy adjustments and technology integration relating to the relationship between the "new" and the "old" [9,10]. In 1991, the Intermodal Surface Transportation Efficiency Act (ISTEA) in the United States defined integrated transportation efficiency as "maximizing the benefits of transportation efficiency based on existing infrastructure to meet the needs of socio-economic development" [11]. Since then, integrated transportation and its efficiency have gradually attracted the attention of government departments and scholars in various countries, marking a shift from single transportation modes to integrated planning and layout. Early research on integrated transportation efficiency focused mostly on measuring urban integrated transportation [12]. Building on this, scholars further explored the efficiency of integrated transportation at the regional [13,14] or national level [15].
To meet the research needs of multi-level transportation efficiency issues, diversified measurement and evaluation methods have emerged, and related studies have shown two development trends. The first is that frontier efficiency analysis is mainly used as the measurement model, with parametric frontier analysis gradually being replaced by non-parametric frontier analysis. A few researchers have chosen stochastic frontier analysis (SFA) from the former to measure transportation efficiency [16]; among the latter, data envelopment analysis (DEA) is the most popular model, widely accepted for its advantage in handling multiple inputs and outputs. The study of transportation efficiency using the traditional DEA method is by now quite mature [17][18][19]. To improve the accuracy and practicality of measurement results, more and more scholars have applied advanced DEA models, such as the multi-stage DEA model [20], the slacks-based measure (SBM) model [21][22][23] and the epsilon-based measure (EBM) model [24]. The second trend is the broader selection of input-output indicators for measurement. It has evolved from considering only desirable outputs to considering both desirable and undesirable outputs, introducing transportation carbon emissions [25] or other social and environmental indicators [26] as undesirable outputs. Especially in recent years, with the increasing prominence of environmental issues and the integration of sustainable development concepts into various fields, the academic community has gradually focused on the sustainable development capabilities of transportation and has begun to widely use models that account for undesirable factors to study the ecological efficiency, energy efficiency and carbon dioxide emission efficiency of integrated transportation in various countries. Leal et al. [27] conducted a DEA analysis of the eco-efficiency of transportation sectors in Brazil. Egilmez et al. [28] assessed the efficiency of carbon emissions and energy consumption in the transportation processes of the manufacturing industry of the United States. Lyovin et al. [29] discussed evaluation criteria for the energy efficiency of Russia's integrated transportation system. Ma et al. [30,31] measured the green efficiency of integrated transportation in 30 provinces of China. Hussain et al. used the SBM model and window analysis to estimate the sustainable transport efficiency of 35 OECD countries, indicating that socioeconomic factors have a remarkable impact on sustainable transport efficiency [32]; they also verified that transport-related climate change mitigation technology has a remarkable impact on efficiency levels [33]. Akbar et al. [34] employed the SBM model with bad output to assess the transport energy efficiency of 19 Belt and Road countries.
Generally, studies on the theory and methods of integrated transportation eco-efficiency have made substantial achievements, which provide useful references for the present paper. However, some deficiencies remain for further research, in the following respects. (1) In terms of research level, the existing literature only evaluates the eco-efficiency of a country's or region's integrated transportation sector at the macro level, and has not studied cities at the micro level. In fact, cities differ in the ecological spatial organisation of integrated transportation. Especially for cities occupying different hub positions, the significant individual differences in their transportation characteristics mean that the ecological and environmental problems they generate also differ. Therefore, when evaluating the eco-efficiency of hubs (i.e. cities) at different levels, corresponding indicator systems should be set up to identify the root causes of their respective problems and provide solutions suited to each situation. (2) In terms of research methods, most of the literature uses the DEA method for quantitative measurement, but this method does not include slack variables in the measurement of inefficiency, does not consider "undesirable" output, and overestimates the actual efficiency value. To improve measurement accuracy, many scholars have adopted the undesirable-output SBM model based on slack variables to calculate efficiency values, but this model cannot handle situations where input and output variables have both radial and non-radial characteristics. The EBM model considers both desirable and undesirable output scenarios and is compatible with both radial and non-radial slack variables. Combining this model with the super-efficiency model can further distinguish effective decision-making units with an efficiency value of 1, allowing an accurate measurement of the level of eco-efficiency. However, there is currently no literature using the super-efficiency EBM model to measure the eco-efficiency of integrated transportation hubs, let alone research analysing the corresponding spatiotemporal evolution patterns and influencing factors.

In view of the above analysis, the purpose of this study is to analyse the eco-efficiency development characteristics and influencing factors of ITHs at the urban level, to explore the overall development level of a country's integrated transportation hubs, and thereby to provide decision-making support for achieving sustainable development of the national integrated transportation system. Specifically, the research process first uses the decision-making trial and evaluation laboratory (DEMATEL) method to construct an indicator system that conforms to the characteristics of ITHs. Secondly, a super-efficiency EBM model considering undesirable outputs is used to calculate the eco-efficiency of twenty international ITHs in China. Then, the kernel density estimation method and the standard deviation ellipse method are used to analyse the spatiotemporal evolution characteristics of eco-efficiency. Finally, the panel Tobit model is used to reveal the main factors affecting the development of eco-efficiency.

The rest of the paper is organised as follows: Section 2 introduces the concept and connotation of the ITH and establishes an evaluation index system for its eco-efficiency. Section 3 explains the evaluation and analysis methods. Section 4 presents the empirical results and discussion. Section 5 provides the conclusions.
Definition of relevant concepts

The transportation industry, as a fundamental industry of the national economy, is crucial for the development of the entire national economy and society. From the development practice of the transportation industry in various countries, the development of an integrated transportation system is a new trend and direction for the modern transportation industry [35], and also a new model for the development of the transportation industry around the world [36,37]. The integrated transportation hub is the main body of the construction and development of the integrated transportation system, and the spatial carrier for the efficient connection and integrated organisation of various transportation modes. It plays an important role in promoting the integration of transportation modes, adjusting transportation structures and advancing the construction of modern industrial systems. Since there is currently no clear academic definition of the ITH, in this paper ITHs are defined as node cities serving regional or national transportation networks, which act as passenger and cargo transfer centres for the integrated development of various interconnected modes of transportation. The three main aspects of integrated development are the coordinated development of all modes of transportation, the integration of the transportation industry with other industries, and the overall coordination between the integrated transportation hub and the transportation corridor. ITHs can be classified into international, national and regional hubs according to service scope and target, and each type of hub possesses distinct transportation characteristics and functions. International ITHs focus on global connectivity and radiation levels, expanding diverse transportation networks by land, sea and air, and serving as international gateways. International ports and stations serve as the main operating locations of international ITHs. Different hubs can build various ports and stations based on their featured modes of transportation, including international railway hubs and stations, international shipping hubs (port hubs) and international aviation hubs. With the rapid development of the economy, the scale of the integrated transportation system is constantly expanding and the transportation network is constantly improving. At the same time, however, the negative impacts on resources, the environment and other aspects are becoming increasingly serious. The current complex and severe transportation problems indicate that the traditional demand-oriented integrated transportation development model can no longer meet green development goals. Therefore, it is urgent to find ways to achieve sustainable transportation construction. More attention has been paid to ecological transportation, leading to a lively discussion among many scholars [38]. Ecological transportation is an eco-friendly transportation system that is planned, constructed and managed following the principles of natural ecology, economic ecology and human ecology. It represents an advanced stage in the development of the integrated transportation system [39]. Distinct from the focus of green transportation [40] and sustainable transportation [41], ecological transportation is an important branch of ecology [42], emphasising the importance of the ecological environment. Ecological transportation takes a strong initiative in not only paying
attention to the impact on the ecological environment but also being able to spontaneously balance its relationship with the ecosystem, and it has the function of improving and optimising the ecological environment. The implementation of a sustainable integrated transportation system needs to fully consider the carrying capacity of resources and the environment, and to build a true ecological system based on the coordination of transportation infrastructure and ecological space. Thus, it is an inevitable trend for future integrated transportation systems to achieve ecological development, and ITHs, as constituent elements of the system, will inevitably evolve in the ecological direction. Eco-efficiency is an important indicator for evaluating the development level of ecological transportation. It was first proposed by Schaltegger and Sturm [43] and subsequently further defined by the Organisation for Economic Co-operation and Development (OECD) as the efficiency with which ecological resources are used to meet human needs, striving to minimise environmental impact while promoting economic development [44]. Eco-efficiency provides new ideas for quantitatively analysing the inputs and outputs of economic development and ecological-environmental conditions, and for measuring the synergistic development relationship between the economy and society on the one hand and the ecological environment on the other. Following this idea, this paper defines the eco-efficiency of an ITH as the degree to which certain costs are invested in the operation of the ITH to meet integrated transportation needs while reducing environmental damage and resource consumption, within the framework of ecological transportation development. That is to say, the larger the transportation output of an ITH under the same input, and the smaller the impact on the environment and resources, the higher the eco-efficiency of the ITH.

Explanation of research objects

This paper conducts a comprehensive evaluation of the eco-efficiency of ITHs in China. A total of 100 cities in China have been designated as ITHs, with 20 cities positioned as international ITHs and 80 as national ITHs, all of which have begun to demonstrate excellent hub functions. Considering the differences in economic development and transportation level between hubs, only the 20 international ITHs are selected for exploration and analysis in this paper, because they cover most of China's provinces and the cities that play an important role in international transportation and foreign trade. They have a relatively mature transportation development history and can accurately reflect the overall development features of China's ITHs. These cities are also the areas with the most comprehensive collection of relevant data. The twenty international ITHs are Beijing, Tianjin, Shanghai, Nanjing, Hangzhou, Guangzhou, Shenzhen, Chengdu, Chongqing, Shenyang, Dalian, Harbin, Qingdao, Xiamen, Zhengzhou, Wuhan, Haikou, Kunming, Xi'an and Urumqi.
Beijing, Shanghai, Guangzhou and Shenzhen have always held leading positions in the development of transportation hubs in China. Tianjin is the traffic throat of North China, with a transportation network extending in all directions. Nanjing and Hangzhou, located in Eastern China, are transportation centres of the Yangtze River Delta region. Chengdu, Chongqing and Kunming are important transportation portals in the southwest, with unique geographical locations and transportation advantages. Shenyang, Dalian and Harbin are located in Northeast China and have built complete integrated transportation networks. Qingdao is a coastal city in Eastern China, where an integrated "sea, land, air and rail" transportation pattern is becoming increasingly complete. Xiamen is an important sea-land-air hub port in the southeast. Zhengzhou and Wuhan are traditional transportation hubs in Central China. Haikou is the centre of the highway and railway network of Hainan Province in Southern China. Xi'an and Urumqi are the two most important transportation hubs in the northwest, with complete railway, highway and aviation networks.

Because integrated transportation is relatively well developed in the twenty hubs listed above, analysing the eco-efficiency of each hub is a feasible way to understand and learn from the overall eco-efficiency status of ITHs in China.

Establishment of evaluation index system

By comparing and analysing the existing literature on ecological transportation [45] and integrated transportation efficiency [13] evaluation index systems, and considering the characteristics of international ITHs, a preliminary selection of fifteen evaluation indicators was made covering three aspects: economy, society and transportation (see Table 1). The DEMATEL method is then applied to identify the key indicators. The DEMATEL method uses graph theory and matrix tools to analyse the logical and direct influence relationships between the elements of a system [46]. It calculates the degree to which each factor influences other factors and the degree to which it is affected by them, as well as the centrality and causality of each factor, to finally identify the main factors in the system. MATLAB software is used in this paper to compute the direct influence matrix of the evaluation indicators, and the results are shown in Table 1. Centrality represents the position of a factor in the indicator system and the degree of its impact; it is obtained by adding the factor's influence degree to its affected degree. The greater the centrality, the more significant the role of the factor in the eco-efficiency development of ITHs. Causality represents the degree to which a factor influences other factors; it is obtained by subtracting the factor's affected degree from its influence degree. If the causality value is greater than 0, the factor has a significant impact on other factors and can be classified as a causal factor. If the causality value is less than 0, the factor is strongly affected by other factors and can be classified as an outcome factor [46].
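To make this procedure concrete, the sketch below reproduces the standard DEMATEL arithmetic in Python, assuming a 15x15 direct-influence matrix like the one behind Table 1 (the expert scores here are synthetic stand-ins; the paper performs this step in MATLAB):

```python
# Minimal DEMATEL sketch: normalise the direct-influence matrix, compute the
# total-relation matrix T = N (I - N)^(-1), then centrality D+R and
# causality D-R for each indicator.
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(0, 5, size=(15, 15)).astype(float)  # stand-in expert scores
np.fill_diagonal(A, 0.0)                             # no self-influence

N = A / A.sum(axis=1).max()              # normalised direct-influence matrix
T = N @ np.linalg.inv(np.eye(15) - N)    # total-relation matrix
D = T.sum(axis=1)                        # influence degree (row sums)
R = T.sum(axis=0)                        # affected degree (column sums)
centrality, causality = D + R, D - R

# Quadrant logic from the text: positive causality -> causal factor,
# negative causality -> outcome factor.
for i, (ce, ca) in enumerate(zip(centrality, causality), start=1):
    kind = "causal" if ca > 0 else "outcome"
    print(f"F{i}: centrality={ce:.2f}, causality={ca:+.2f} ({kind})")
```

The quadrant assignment of Figure 1 then follows by thresholding centrality at its mean (3.3 in the paper) and causality at zero.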
To visually compare the centrality and causality of the factors, a centrality-causality quadrant diagram is constructed (Figure 1). This quadrant chart is divided into four quadrants centred on the average centrality value of 3.3 and a causality value of 0, with centrality and causality as the two dividing lines. The factors in the first quadrant have high centrality and high causality, so they can be taken as causal factors. The factors in the second quadrant have high causality but low centrality, so their importance is slightly lower. The factors in the third quadrant have both low centrality and low causality, so they can be considered relatively unimportant. The factors in the fourth quadrant have high centrality and low causality, so they can be taken as outcome factors. Therefore, indicators are preferentially chosen from the first and fourth quadrants.

Figure 1 - Centrality and causality of all indicators

It can be seen from Figure 1 that five indicators (F1, F3, F4, F5 and F8) are located in the first quadrant with high centrality and high causality; these are the main factors affecting the eco-efficiency of ITHs and can be used as input indicators. Four indicators (F6, F7, F9 and F10) are located in the fourth quadrant with high centrality and low causality; they are outcome factors that reflect the results of eco-efficiency and can be used as output indicators. Therefore, nine input and output indicators are ultimately selected to measure the eco-efficiency of ITHs and to establish the evaluation indicator system (Table 2).

Data description

The study covers twenty ITHs in China from 2011 to 2021. Regarding the data sources for the indicators in Table 2, the data for each hub are mainly taken from the "China Statistical Yearbook", "China Urban Statistical Yearbook" and "China Energy Statistical Yearbook", as well as from statistical bulletins on national economic and social development and the relevant official websites of each city. For the four indicators of integrated transportation passenger and freight mileage and total passenger and freight volume of integrated transportation, because some hubs have no water transportation, the corresponding indicator data for roads, railways and aviation are chosen to unify the measurement calibre and are then converted to the total volume of integrated transportation. The data on urban CO2 emissions are sourced from the China Carbon Accounting Database, and since CO2 emissions from the urban transportation industry account for approximately 10% of urban CO2 emissions [47], the transportation CO2 emissions of each city are calculated on that basis. Descriptive statistics of the data are presented in Table 3.
Super-efficiency EBM model

The traditional DEA model is a radial model, which improves ineffective decision-making units (DMUs) under the assumption that inputs or outputs change proportionally. However, when there are excessive inputs or insufficient outputs, that is, when there is nonzero slack in inputs or outputs, the radial DEA model neglects to improve the slack, resulting in biased calculation results. To overcome these problems, Tone [48] proposed the SBM model in 2001. This model captures the non-radial variation between inputs and outputs by adding non-radial slack variables, avoiding the assumption of proportional change, and incorporates undesirable output into the model, making the calculation results more appropriate. However, the SBM model loses the original proportion information of the projection values on the efficiency frontier during the calculation, which may distort the results; moreover, it cannot handle situations with both radial and non-radial characteristics.

The relationship between inputs and outputs in the production process of the transportation industry is relatively complex. On the one hand, inputs such as transportation capital and labour may not vary proportionally with outputs in reality, indicating a non-radial relationship between inputs and outputs. On the other hand, there is a radial relationship between transportation energy input and carbon output; that is, consuming a certain proportion of energy produces the same proportion of undesirable outputs such as CO2. Therefore, this paper incorporates undesirable outputs into the calculation framework and uses the EBM model proposed by Tone et al. [49] to calculate the eco-efficiency of ITHs. The EBM model effectively reflects the proportion information between objective values and actual values and simultaneously handles radial and non-radial slack changes between inputs and outputs, enhancing the relative comparability of DMUs. At present, it is gradually being applied to efficiency measurement in various fields [50,51], and has been widely used in studies on ecological efficiency, energy efficiency and related areas. However, the conventional EBM model may encounter situations where the efficiency values of multiple DMUs are all equal to 1; hence the super-efficiency EBM model is adopted to further analyse the differences between efficient evaluation units [52] and improve the evaluation accuracy.

The EBM model comprehensively evaluates efficiency values from three aspects: inputs, desirable outputs and undesirable outputs. Accordingly, three constraint conditions are established to ensure that, by increasing or reducing slack variables, the actual levels of inputs, desirable outputs and undesirable outputs can reach the levels of those on the optimal frontier (i.e.
the efficient level); that is, obtaining the maximum desirable outputs and the minimum undesirable outputs with the least inputs. A slack variable represents the difference between a DMU's current efficiency value and its efficiency value at the optimal frontier. The current efficiency value is obtained by linearly combining all inputs (or desirable outputs, or undesirable outputs); the optimal-frontier efficiency value corresponds to the inputs (or desirable outputs, or undesirable outputs) at the efficient level. Introducing the slack variables into the objective function solves the inefficiency problem caused by variable slackness relative to the efficiency at the optimal frontier.

Suppose there are t DMUs, expressed as DMU_k (k = 1, 2, ..., t). Each DMU_k has m types of inputs x_ik (i = 1, 2, ..., m), n types of desirable outputs y_rk (r = 1, 2, ..., n) and q types of undesirable outputs b_pk (p = 1, 2, ..., q). The vectors of inputs, desirable outputs and undesirable outputs can thus be represented as X = [x_1, x_2, ..., x_t] ∈ R^(m×t), Y = [y_1, y_2, ..., y_t] ∈ R^(n×t) and B = [b_1, b_2, ..., b_t] ∈ R^(q×t), respectively. The super-efficiency EBM model with undesirable outputs is expressed by Equation 1, where ρ* is the eco-efficiency value; φ is the output expansion ratio; θ is the planning parameter of the radial part; ε_x, ε_y and ε_b are key parameters of the non-radial part, with 0 ≤ ε ≤ 1; and s_i^-, s_r^+ and s_p^- are the slack variables of inputs, desirable outputs and undesirable outputs, respectively.

The judgment criteria for efficiency status are as follows. If ρ* < 1, DMU_k is in an inefficient state; if ρ* ≥ 1, DMU_k has reached an efficient state, and the larger the value of ρ*, the higher the level of eco-efficiency (a simplified code sketch of the underlying envelopment logic follows the Table 4 discussion below).

Panel Tobit regressive model

Because the calculated eco-efficiency values are truncated, the dependent variable is limited and exhibits a censored distribution. To avoid estimation bias, the panel Tobit regressive model [53] is selected to analyse the factors influencing the eco-efficiency of ITHs. The model can be expressed as

Y*_it = α_i + β_1 x_1,it + β_2 x_2,it + ... + β_n x_n,it + ε_it,   Y_it = max(Y*_it, 0)   (2)

where Y_it represents the eco-efficiency of the ith hub in the tth year (i = 1, 2, ..., 20; t = 1, 2, ..., 11); x_1, x_2, ..., x_n represent the influencing factors; β_1, β_2, ..., β_n represent the coefficients of the influencing factors, reflecting the level of influence of each factor on the eco-efficiency of ITHs; α_i represents the intercept term, which accounts for the baseline eco-efficiency level of each ITH; and ε_it represents the random error term, which captures unobserved factors or measurement errors affecting the eco-efficiency of the ith hub in the tth year.

Measurement of eco-efficiency

MAXDEA Ultra 9 software is applied to compute the eco-efficiency of each ITH according to Equation 1, and the average eco-efficiency values of all ITHs from 2011 to 2021 are ranked from high to low (Table 4).

According to Table 4, the average eco-efficiency of all ITHs from 2011 to 2021 is 0.964, indicating a relatively high level of ecological development of China's ITHs, though not yet an optimal one. The average eco-efficiency values of the individual hubs ranged from 0.5 to 1.4, and nine hubs, namely Guangzhou, Haikou, Beijing, Shenzhen, Shanghai, Chongqing, Kunming, Chengdu and Wuhan, have values greater than 1, illustrating that these hubs have achieved an ideal input-output ratio in terms of transportation resources. On the other hand, Tianjin, Hangzhou, Nanjing, Dalian and Harbin show relatively low levels, indicating redundancy or insufficiency in the input and output of transportation resources; adjustments are required to ensure the efficient utilisation of resources in these hubs.
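Equation 1 itself is solved with dedicated software (MAXDEA Ultra 9 above). Purely as a hedged illustration of the envelopment logic it builds on, the sketch below solves the far simpler radial, input-oriented CCR DEA model with SciPy on toy data; the super-efficiency EBM additionally involves the ε-weighted non-radial slacks, the undesirable outputs B and the exclusion of the evaluated DMU from its own reference set, all of which are omitted here:

```python
# Radial (CCR, input-oriented) DEA envelopment LP:
#   min theta  s.t.  X @ lam <= theta * x_k,  Y @ lam >= y_k,  lam >= 0
# Toy data only; the paper's nine indicators would replace these columns.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])   # t DMUs x m inputs
Y = np.array([[1.0], [1.0], [1.0]])                  # t DMUs x n outputs

def ccr_efficiency(k, X, Y):
    t, m = X.shape
    n = Y.shape[1]
    c = np.r_[1.0, np.zeros(t)]                 # variables: [theta, lambdas]
    # inputs:  sum_j lam_j x_ij - theta x_ik <= 0
    A_in = np.c_[-X[k].reshape(m, 1), X.T]
    # outputs: sum_j lam_j y_rj >= y_rk  ->  -Y.T @ lam <= -y_k
    A_out = np.c_[np.zeros((n, 1)), -Y.T]
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[k]]
    bounds = [(None, None)] + [(0, None)] * t
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun                              # theta* in (0, 1]

for k in range(X.shape[0]):
    print(f"DMU {k}: theta* = {ccr_efficiency(k, X, Y):.3f}")
```

In the EBM setting a DMU with ρ* ≥ 1 is efficient; in this plain CCR sketch the analogous reading is θ* = 1.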
Generally, the eco-efficiency of China's ITHs has shown a trend of first decreasing and then increasing over the 11-year period. The average eco-efficiency declined year by year from 2012 to 2016, reaching a minimum of 0.837 in 2016. The reason is that since 2012 China has entered a new stage of accelerated construction of a modern integrated transportation system, with significant advances in transportation infrastructure such as railways, highways and civil aviation. This construction process inevitably increased the consumption of natural resources and the pollution of the ecological environment, resulting in a significant decline in eco-efficiency. The eco-efficiency value then fluctuated upwards from 2017 to 2021, reaching a maximum of 1.043 in 2021. The reason is that the Ministry of Transport of China formulated its strategy of green, circular and low-carbon development for the first time in 2013 and subsequently introduced a series of regulations, policies and standards aimed at promoting green development in the transportation industry comprehensively and nationwide. All hubs actively carried out energy-saving and emission-reduction activities in transportation and achieved phased results in the following years, leading to an improvement in the eco-efficiency of the hubs. The calculated results of the model are therefore consistent with the actual situation, and the conclusion is reliable.

Analysis of eco-efficiency spatiotemporal evolution

Kernel density estimation is a non-parametric estimation method that transforms the data of a random variable into the form of a density curve. It can visually display overall information such as the number, position, height and tail behaviour of the peaks [54]. A three-dimensional kernel density diagram is produced with MATLAB to intuitively demonstrate the temporal characteristics of the eco-efficiency of all ITHs (Figure 2).

Figure 2 - Three-dimensional eco-efficiency kernel density curve of ITHs in China

Over 2011-2021, the eco-efficiency kernel density curve of China's ITHs shifted leftward and then rightward, indicating that the overall eco-efficiency of China's ITHs first decreased and then increased. The kernel density curve shows a single-peak distribution with no polarisation among the eco-efficiencies. Taking 2017 as the dividing point, the peak height declined and the peak width widened before 2017, indicating that the eco-efficiency differences among the ITHs became larger. After 2017, the peak height rose and the width narrowed, showing that the degree of difference in eco-efficiency decreased and the spatial imbalance improved. In recent years, due to resource competition among hubs with similar development levels, there is an obvious gradient feature in the eco-efficiency of China's ITHs. Observing the tail behaviour in each year, a clear rightward tail can be seen, indicating that the number of hubs with high eco-efficiency values is gradually increasing. The eco-efficiency of China's ITHs therefore exhibits an obvious "bucket effect", and the focus of future improvement should lie on the hubs with low eco-efficiency.
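As a rough sketch of how a ridge plot like Figure 2 is assembled (the paper uses MATLAB), one Gaussian kernel density can be fitted per year over that year's 20 hub scores and the resulting curves stacked along the year axis; the scores below are synthetic stand-ins:

```python
# One Gaussian KDE per year over the 20 hub efficiency scores; peak height
# and width track the cross-hub dispersion discussed in the text.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
grid = np.linspace(0.3, 1.6, 200)              # eco-efficiency axis
for year in range(2011, 2022):
    scores = rng.normal(0.96, 0.15, size=20)   # stand-in for 20 ITH scores
    density = gaussian_kde(scores)(grid)       # bandwidth: Scott's rule
    print(year, f"peak height = {density.max():.2f}")
```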
The standard deviation ellipse method is a classical method for analysing the directional characteristics of a spatial distribution. The size of the ellipse reflects the concentration of the overall spatial pattern, and the inclination angle (long axis) reflects the dominant direction of the pattern [55]. The standard deviation ellipse method is used in this study to analyse the spatial distribution pattern and transfer characteristics of the eco-efficiency of China's ITHs. The spatial transfer map of eco-efficiency is shown in Figure 3. The ellipse data were obtained with ArcGIS software, and the attributes of the standard deviation ellipses are listed in Table 5.

As shown in Figure 3, the spatial distribution of the eco-efficiency of China's ITHs exhibited an overall Northeast-Southwest pattern from 2011 to 2021, with a tendency to shift towards the Northeast, meaning that hubs located in Eastern and Northern China are ecologically well developed. The gradually increasing rotation angle of the ellipse tended to stabilise, showing that this spatial pattern has become relatively stable. The fluctuation in the transfer distance of the centre of gravity is significant, with the largest distance observed during 2014-2017, indicating an imbalance in the regional development of eco-efficiency, with significant differences in the east-west direction. The gradually decreasing area of the ellipse reflects a shift of the spatial distribution of eco-efficiency from dispersion to concentration, with a tendency to cluster in the eastern and northern regions. The implementation of China's green and low-carbon development strategy in transportation has accelerated the change in eco-efficiency, with a clearer direction but a more serious gradient phenomenon; hence attention should also be paid to coordinated development among ITHs.

Analysis of factors influencing eco-efficiency

Having investigated the current status and spatial imbalance of the eco-efficiency of China's ITHs, further analysis of the influencing factors is necessary in order to propose specific measures for improving the eco-efficiency level. Drawing on relevant research, this study takes the eco-efficiency value of each ITH as the dependent variable and selects influencing factors from the perspectives of hub economic development, environmental protection and transportation development. Seven indicators are chosen as independent variables (Table 6). We use a panel dataset of 1,540 observations from twenty ITHs in China during 2011-2021. The data are sourced from the "China Statistical Yearbook", "China Urban Statistical Yearbook" and the official websites of each city. Carbon emissions from transportation are estimated as approximately 10% of urban carbon emissions, based on previous studies [56]. Descriptive statistics of the data are presented in Table 7.
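The ellipse attributes of Table 5 come from ArcGIS; as a hedged sketch of what that tool computes, the function below implements one common closed-form variant of the weighted standard deviational ellipse (mean centre, rotation angle and axis lengths), with purely illustrative coordinates and weights:

```python
# Weighted standard deviational ellipse: weighted mean centre, rotation
# angle of the major axis, and the two axis standard distances. Conventions
# differ between implementations; ArcGIS also applies a sqrt(2) scale factor.
import numpy as np

xy = np.array([[116.4, 39.9], [121.5, 31.2], [113.3, 23.1],
               [114.1, 22.5], [104.1, 30.7]])   # lon/lat of five example hubs
w = np.array([1.05, 1.10, 1.39, 1.12, 1.01])    # eco-efficiency as weights

def sd_ellipse(xy, w):
    cx = np.average(xy[:, 0], weights=w)
    cy = np.average(xy[:, 1], weights=w)
    dx, dy = xy[:, 0] - cx, xy[:, 1] - cy
    sxx, syy = np.sum(w * dx * dx), np.sum(w * dy * dy)
    sxy = np.sum(w * dx * dy)
    a = sxx - syy
    # classical closed form: tan(theta) = (a + sqrt(a^2 + 4 sxy^2)) / (2 sxy)
    theta = np.arctan2(a + np.hypot(a, 2.0 * sxy), 2.0 * sxy)
    c, s = np.cos(theta), np.sin(theta)
    sig1 = np.sqrt(np.sum(w * (dx * c - dy * s) ** 2) / w.sum())
    sig2 = np.sqrt(np.sum(w * (dx * s + dy * c) ** 2) / w.sum())
    return (cx, cy), np.degrees(theta) % 180.0, sig1, sig2

centre, angle, ax1, ax2 = sd_ellipse(xy, w)
print(f"centre={centre}, rotation={angle:.1f} deg, axes=({ax1:.2f}, {ax2:.2f})")
```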
Before conducting the Tobit regression analysis, it is necessary to test for multicollinearity among the selected independent variables. We use the variance inflation factor (VIF) to test for collinearity; as a general rule of thumb, VIF < 10 indicates the absence of multicollinearity among the independent variables [57]. The results of the multicollinearity test, obtained with STATA 16.0 software, are shown in Table 8. The mean VIF is 2.95, with a maximum of 4.67. The VIF values of all independent variables are below 5, with a mean below 3, so it can be concluded that there is no multicollinearity and the Tobit regression results can be used for analysis. STATA 16.0 software is then used to estimate Equation 2 and perform the LR test. The results are shown in Table 9. The LR test result (Prob >= chibar2 = 0.000) confirms the overall significance of the regression model and the good fit of the regression coefficients. Among the seven independent variables, five passed the significance test. The specific influence of each variable on the eco-efficiency of ITHs is analysed as follows (a minimal code sketch of this workflow appears below).

The regression coefficient of economic development level is 0.3578, passing the significance test at the 1% level, which indicates a positive correlation with eco-efficiency; thus the economic development of China's ITHs can promote the improvement of eco-efficiency. Eco-efficiency is the ratio of output to input, and the optimal result is to achieve more output with as little input as possible. In this sense the hubs have invested economic factors effectively, meaning that continued investment can also generate corresponding benefits in ecological development, thereby advancing eco-efficiency. For this reason, ITHs should further expand effective investment in transportation, gather advantageous resources and efforts, promote efficient and green transportation modes, and improve the efficiency and sustainable development quality of hub services with smaller investments, in order to enhance hub functions.

The regression coefficient of the level of opening-up is 0.4011, passing the significance test at the 5% level, which indicates a positive correlation with eco-efficiency. Transportation is crucial for a country's opening-up and cooperation with the outside world. In recent years, China has continuously increased its opening-up efforts and actively promoted international exchange and cooperation in the field of transportation. Significant progress has been made in the joint construction of globally sustainable transportation, participation in green transportation cooperation projects and the construction of multimodal cross-border transportation corridors. Taking advantage of this opportunity, various localities are making every effort to promote the construction of low-carbon ITHs. By upgrading integrated hub systems and optimising the combination of transportation modes, the eco-environmental quality of ITHs has been significantly improved, greatly enhancing the hubs' eco-efficiency.
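A minimal sketch of this second-stage workflow follows, with two loudly flagged simplifications: the data are synthetic stand-ins for the 20 hubs x 11 years x 7 factors, and the Tobit below is a pooled maximum-likelihood fit with a single constant rather than the per-hub intercepts α_i of the panel Tobit estimated in STATA 16.0:

```python
# VIF screen (as in Table 8) followed by a pooled left-censored Tobit MLE.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n, p = 220, 7                                    # 20 hubs x 11 years
X = rng.normal(size=(n, p))
y_star = 0.3 + X @ rng.normal(scale=0.2, size=p) + rng.normal(scale=0.3, size=n)
y = np.maximum(y_star, 0.0)                      # left-censoring at zero

Xc = sm.add_constant(X)
vif = [variance_inflation_factor(Xc, i) for i in range(1, Xc.shape[1])]
print("VIF:", np.round(vif, 2))                  # rule of thumb: all < 10

def neg_loglik(params):
    beta, sigma = params[:-1], np.exp(params[-1])    # exp keeps sigma > 0
    mu = Xc @ beta
    censored = y <= 0.0
    ll = np.where(censored,
                  norm.logcdf(-mu / sigma),          # P(y* <= 0 | x)
                  norm.logpdf(y, loc=mu, scale=sigma))
    return -np.sum(ll)

fit = minimize(neg_loglik, np.zeros(Xc.shape[1] + 1), method="BFGS")
print("Tobit coefficients:", np.round(fit.x[:-1], 3))
```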
The regression coefficient of urban green coverage is 1.1217, showing a positive correlation with eco-efficiency at the 10% significance level. Practice has shown that a good level of urban greening, especially road greening, can effectively improve the ecological environment and contribute considerably to the reduction of vehicle exhaust emissions through air purification. With the increasing attention paid to urban greening in China, ITHs are continuously enlarging their local green areas and gradually perfecting the ecosystem, which basically meets the development requirements of road construction. In the future, the expansion of green space will continue to be an important way to enhance eco-efficiency. It is worth noting that, given the current shortage of urban road land in most hubs, the urban green rate should be used effectively to achieve a balance between ITH construction and the ecological environment.

The level of environmental protection input has a promoting effect on eco-efficiency, but it does not pass the significance test. Environmental protection and the low-carbonisation of transportation vehicles are among the most pressing issues worldwide. The reason for failing the test is that current investment in transportation environmental protection lags behind the accumulation of transportation pollutants, and the various input factors have not yet been fully utilised while the layout of ITHs continues to improve. Owing to the latent and lagging character of the promoting effect, hubs need to make persistent efforts in environmental protection investment in transportation to steadily raise the level of transportation eco-environmental protection in the future.

The density of transportation carbon emissions shows a negative, inhibitory effect on the eco-efficiency of ITHs, but the effect is not yet significant. The density of transportation carbon emissions here refers to the ratio of transportation carbon emissions to total passenger and freight turnover, i.e. the carbon emissions generated per unit of turnover completed by an ITH, reflecting its carbon emission efficiency. The reasons may lie in two aspects. First, China's ITHs are currently in a period of rapid development, and the continuous expansion of transportation facilities has stimulated a significant increase in transportation demand; this is reflected in the fact that the growth rate of transportation volume is higher than that of carbon emissions, so the carbon emission density is relatively low and its effect on eco-efficiency is not significant enough. Second, in recent years China's transportation industry has achieved significant results in green and low-carbon development, with a continuous decrease in carbon emissions per unit of GDP, which has to some extent weakened the negative impact of carbon emissions on eco-efficiency. In the future, ITHs should further improve their transportation organisation efficiency through optimisation of the transportation demand structure, strengthening of low-carbon transportation technology and adjustment of the transportation energy structure, ensuring that carbon emissions do not lead to a significant decrease in eco-efficiency.
The regression coefficient of the level of scientific and technological input is -0.1709, representing an inhibitory effect on eco-efficiency at the 1% significance level, which implies that more scientific and technological input does not necessarily lead to higher eco-efficiency. Pursuing investment scale while neglecting the transformation and application of achievements conversely produces inefficient investment and hinders the application of modern technology and the innovation of emerging technologies, creating obstacles to eco-efficiency improvement. The transformation rate of scientific and technological achievements in China's transportation field is currently relatively low, and the mismatch between scientific research achievements and market demand is one of the most important reasons. ITHs should accordingly identify their own transportation demand based on each hub's development orientation, conduct targeted investment and R&D in science and technology, pay attention to matching initial investment with output application, and ultimately form a virtuous, interactive system for the transformation of technological achievements, providing strong support for eco-efficiency improvement.

The regression coefficient of integrated transportation efficiency is 1.1655, showing a positive correlation with eco-efficiency at the 10% significance level. Transportation efficiency reflects the effective utilisation of transportation resources. Over the years, China has regarded the promotion of green and low-carbon transformation as a strategic task for sustainable transportation development, continuously promoting the conservation, intensification and recycling of transportation resources. Especially with the construction of the national integrated transportation system, ITHs have made significant progress in constructing green transportation infrastructure, optimising the transportation structure and integrating transportation resource elements, which has led to growth in the eco-efficiency level.
Based on the above findings, we can offer the following policy recommendations. ITHs are usually economically developed and have abundant transportation resources, with a high degree of agglomeration of various factors. In promoting eco-hub construction, ITHs should channel more investment of various funds towards transportation resources that meet ecological requirements and ensure that eco-hub development matches capital investment, so as to maximise urban eco-efficiency. As important nodes of the national integrated transportation system, ITHs may generate more transportation pollution than ordinary regions, and their situation is also more complex. It is therefore necessary to strengthen the top-level design of eco-hubs and establish integrated transportation planning from the macro to the specific level from an ecological viewpoint. That is, ITHs should actively participate in global cooperation and exchange in the field of sustainable transportation and jointly build green, low-carbon transnational transportation corridors; they should share green technologies, develop cross-regional intelligent transportation and establish cooperation mechanisms to improve transportation resource utilisation; and they should promote coordinated development between hub construction and the hub ecosystem, continuously increase investment in transportation environmental protection, and improve the transformation of transportation scientific and technological achievements guided by actual market demand. As a result, the function of ITHs is strengthened at the international, intercity and hub levels, with the aim of creating an ecological integrated transportation system that maximises and optimises the use of limited space and transportation resources.

CONCLUSIONS

This paper proposes an evaluation method for the eco-efficiency of ITHs based on the super-efficiency EBM model and analyses the factors influencing eco-efficiency through a panel Tobit regressive model; all results are consistent with the actual situation. The following conclusions were drawn:

1) In 2011-2021, the average eco-efficiency levels of the individual international ITHs ranged from 0.573 to 1.395, with half of the hubs' values greater than 1. The average eco-efficiency of all international ITHs in China was 0.974, indicating that China's ITHs developed well as a whole but have not yet reached an efficient state. Owing to the earlier acceleration of transportation infrastructure construction and the later, gradual implementation of low-carbon transportation strategies, the overall average eco-efficiency of ITHs in China declined first and then increased over the 11-year period. There was no polarisation phenomenon, but the gradient distribution characteristics among ITHs were obvious. Guangzhou ranks first, followed by Haikou, and Harbin ranks last.
2) The application of the Tobit model to the factors influencing the eco-efficiency of ITHs reveals that the economic development level, urban green coverage, level of opening-up and integrated transportation efficiency had a significant positive impact on eco-efficiency. Among them, the greatest impact arose from integrated transportation efficiency, followed by urban green coverage, level of opening-up and economic development level. The level of scientific and technological input has an obvious negative impact on the eco-efficiency of ITHs, which currently hinders their ecological development. The level of environmental protection input and transportation carbon emission efficiency did not show a significant impact on eco-efficiency.

3) One limitation of the study is that the lack of some original data may affect calculation accuracy. For example, data for individual indicators, such as the freight mileage of railway transportation, were missing in some years, so the paper used the grey prediction model to predict and supplement the time-series data. Data on transportation carbon dioxide emissions cannot be obtained directly from the relevant departments, so the paper approximated them by multiplying total urban carbon dioxide emissions by the proportion attributable to transportation. These data processing steps may introduce certain calculation errors. Another limitation concerns the quantitative analysis of how each factor affects the eco-efficiency of ITHs. Although the Tobit model captures the regression relationship, it cannot explore the dynamic impact mechanism of each factor on eco-efficiency, and research on the impact trends is insufficient. Further research should therefore focus on the dynamic correlation mechanism between changes in the various influencing factors and eco-efficiency. In addition, cluster analysis or latent class analysis could be conducted on ITHs according to the main factor indicators affecting eco-efficiency, in order to identify the indicator standards and development laws that different categories of hubs should follow to achieve the ecological development of integrated transportation, and then provide more specific improvement suggestions for the corresponding indicators. In particular, by tracking the impact of the introduction and promotion of relevant policies on the overall eco-efficiency of all hubs and of each category of hubs, the regression model proposed in this study can be verified and supplemented.
Figure 3 - Spatial transfer map of eco-efficiency of ITHs in China
Table 1 - Calculation results of preliminary evaluation indicators using DEMATEL method
Table 2 - Evaluation indicator system for the eco-efficiency of ITHs in China
Table 3 - Descriptive statistics of the independent variables
Table 4 - The eco-efficiency measurement results of ITHs in China (* Aver. indicates the average value)
Table 5 - Attributes of standard deviation ellipse
Table 6 - Indicator system for factors influencing eco-efficiency of ITHs in China
Table 7 - Descriptive statistics of the independent variables (N is the sample size of each variable: the number of research periods, 11 years, multiplied by the number of research subjects, 20 cities; the total number of observations is N multiplied by the number of variables, 7)
Table 8 - Results of VIF test
Table 9 - Results of Tobit regression
Electro-optic correlator for large-format microwave interferometry: Up-conversion and correlation stages performance analysis

In this paper, a microwave interferometer prototype with a near-infra-red optical correlator is proposed as a solution for building a large-format interferometer with hundreds of receivers for radio astronomy applications. A 10 Gbit/s lithium niobate modulator has been tested as part of the electro-optic correlator up-conversion stage that will be integrated in the interferometer prototype. Its internal circuitry consists of a single-drive modulator biased through a SubMiniature version A (SMA) connector, allowing up-conversion of microwave signals with bandwidths up to 12.5 GHz to the near-infrared band. To characterize it, a 12 GHz tone and a bias voltage were applied to the SMA input using a bias tee. Two experimental techniques for stabilizing the modulator operating point at its minimum optical carrier output power are described. The best results showed a rather stable spectrum in amplitude and wavelength at the output of the modulator, with an optical carrier level 23 dB lower than the signal of interest. In addition, preliminary measurements were made to analyze the correlation stage, using 4f and 6f optical configurations to characterize both the antenna/fiber array configuration and the corresponding point spread function.

I. INTRODUCTION

During the last decades, ultra-sensitive radio astronomy instruments have been used to characterize the Cosmic Microwave Background (CMB). The CMB is the thermal radiation left over from the time of recombination after the Big Bang; it was hypothesized by Gamow, Alpher, and Herman in the late 1940s 1 and later accidentally discovered by the American radio astronomers Penzias and Wilson in 1964. 2 Since then, several ground-based experiments [3-7] and space missions [8-11] have been dedicated to the analysis of the temperature and polarization anisotropies of the CMB at different frequency ranges. There are also future experiments [12-14] aiming to improve on the sensitivity reached by current experiments, with the goal of measuring the B-mode polarization pattern predicted by inflationary models of the early Universe.

Most currently active experiments operate as direct-imaging telescopes whose number of receivers is limited by the space available in the focal plane. Therefore, alternative ways to achieve better sensitivities must be considered, especially in the lower bands of the microwave range (10-50 GHz). In particular, the construction of an interferometer overcomes the space limitation of direct-imaging telescopes by potentially correlating a much larger number (hundreds or even thousands) of elements.
However, the number of elements in CMB interferometers has usually been limited by the difficulty, with traditional analog correlators, of phase-controlling and routing a large number of wide-band microwave signals. Other options, such as digital correlators, were also discarded because they would need an unaffordable number of digitizing cards, such as Field-Programmable Gate Arrays (FPGAs), to cover the large number of wideband signals required in CMB experiments. A novel alternative, already implemented in embedded instruments designed for security and defense, 15 is the use of electro-optical correlators, which drastically reduce the complexity of correlating a large number of wideband microwave signals. The goal of the reported work is to explore the viability of an electro-optic correlator for a future large-format interferometer intended to characterize the lower frequency bands of the CMB. In particular, a correlator prototype covering a bandwidth from 10 to 12 GHz is being developed. This reduced bandwidth is due to the use of low-cost modulators, required to overcome budget limitations. This correlator is part of an interferometer prototype with four receivers expected to be installed at the Teide Observatory (TO) in Tenerife (Spain). This kind of interferometer provides a synthesized image of the polar parameters in the sky region determined by the instrument Field of View (FoV), which is mainly determined by the beam of the horn antennas.

This work describes and analyzes the stability and the carrier suppression level achieved in the modulators of the up-conversion stage by implementing a closed-loop feedback 16 using a photodiode at the output of the modulator and either a commercial bias controller or a simple lock-in amplifier circuit to generate the modulator bias signal. In addition, to provide a wider view of the interferometer operation, some preliminary tests were made to characterize the near-infra-red (NIR) correlation process performed by optical components.

The document is divided into seven sections. The first is this introduction, followed by a description of the prototype in Section II. Section III focuses on the modulator analysis, and Section IV describes the experimental characterization. Section V provides the results achieved using two different techniques to stabilize the DC bias point of the modulator. Section VI describes the optical setup implemented for preliminary measurements related to the NIR correlation stage. Finally, Section VII draws general conclusions of this work.
II. INTERFEROMETER PROTOTYPE DESCRIPTION

The prototype reported in this work will be composed of four microwave receivers covering the 10-20 GHz frequency band and the proposed NIR correlator. Because of a prohibited frequency band from 14 to 16 GHz at the TO due to interference, the incoming signal is divided into two sub-bands, 10-14 GHz and 16-20 GHz, respectively. Following the scheme of the QUIJOTE experiment instrumentation, 17 the four output signals from each microwave receiver are proportional to combinations of Stokes parameters. Afterwards, in the correlator up-conversion stage, these microwave signals will modulate the signal of an SFLS1550S laser diode. The selected laser has a linewidth below 100 kHz, a maximum optical power of 40 mW, and a center wavelength of 1550 nm. Its performance was evaluated using a TEC controller in order to find a stable temperature and current operating point free from multi-mode behavior. With this characterization, we fixed 27 °C and 320 mA as the working point, at which the optical power at the output of the laser is 24 mW (Fig. 1).

Once a stable operating point of the laser diode was selected, its output signal is divided by optical couplers and introduced at the modulator inputs. The microwave signals will then be up-converted and correlated by optical components to measure the polarization of the CMB radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. The analysis of the modulator performance and the correlation stage is described in Sections III-VI. A simplified schematic of the final prototype is shown in Fig. 2.

III. MODULATOR ANALYSIS

The modulator selected to implement the up-conversion stage was the X-2623Y from Lucent Technologies, with an operating wavelength of 1550 nm and a typical microwave bandwidth of 10 GHz.

A. Bandwidth

Given its bandwidth specifications, the modulator was used to implement the up-conversion stage for the lower-frequency band of the prototype. Its internal circuitry consists of a single-drive modulator biased through a SubMiniature version A (SMA) connector, allowing up-conversion of the microwave input signals from the back-end of the receivers to the NIR band. Fig. 3 shows the modulator output spectrum when a 10 GHz tone is applied, together with the attenuation of the up-converted microwave signal as the frequency increases from 10 to 14 GHz when the modulator is not DC biased. As the measured attenuation increases considerably with frequency, it was decided to characterize the modulator using a 12 GHz tone, coinciding with the center of the lower frequency band.

B. DC bias

Depending on the application, the modulator can be DC biased or not. For the particular application of this work, it is very important to suppress the optical carrier at the output of each modulator as much as possible while operating at a stable DC bias point. This means that the carrier and up-converted signal levels should maintain the same values during the overall operation time of the instrument, taking into account that this kind of instrumentation operates over long periods of time to reach the required measurement sensitivity.

C. Stability vs time

The stability of the modulators' performance will define the correlator quality, since the instrument will be calibrated at the start of its operation and the achieved performance should be maintained during the observation period. If not, large errors in phase or amplitude will appear, reducing the quality of the scientific results. 18
Consequently, keeping a stable output spectrum versus time will reduce the complexity of analyzing the data at the output of the correlator.

IV. EXPERIMENTAL CHARACTERIZATION

The up-conversion stage for one receiver output signal was implemented to characterize the stability and the carrier suppression level that could be achieved with one of the sixteen modulators that will make up the complete up-conversion stage. To emulate this scenario, the signal from the laser was introduced into an optical attenuator, which allows the incoming optical power to be adjusted. After the attenuation stage, a 1 × 4 optical coupler was used to represent the distribution of the laser signal to all the modulators that will compose the final prototype. As the optical components are not yet fused, commercial transitions were used to interconnect them, but their losses are negligible for these test results.

The first measurements were made without DC biasing the modulator. Since the microwave receivers are still in the design phase, a 12 GHz, 10 dBm tone was produced by an Agilent Technologies E8257D signal generator. With the laser diode biased at the selected operating point, the optical power at the input of the modulator was 8 dBm. With this configuration and using a Brillouin Optical Spectrum Analyzer (BOSA) with a high spectral resolution of 0.082 pm, the output spectrum of the modulator is shown in Fig. 4. The separation between the carrier and each sideband is 0.096 nm, corresponding to the 12 GHz signal at the input of the modulator (Δλ = λ²Δf/c ≈ (1550 nm)² × 12 GHz / c ≈ 0.096 nm).

Once the NIR spectrum was obtained, it was measured several times over a period of 9 min in order to analyze its wavelength and power stability. Figs. 4(b) and 4(c) show the results of this characterization. In the case of wavelength stability, the modulator presented variations lower than 1 × 10⁻³ nm (0.125 GHz). Meanwhile, the power spectrum variations were lower than ±1 dB. Afterwards, a DC bias was applied at the SMA connector using a polarization tee (Fig. 5(a)). This new setup is intended to reduce the optical carrier in the infrared spectrum. Maintaining the 12 GHz modulation, a sweep of the DC bias over the range from 0 to 5 V was performed in order to study the carrier suppression levels that could be achieved with respect to the up-converted images of the microwave signals. Fig. 5(b) shows the power difference between the carrier and both sidebands versus DC bias, and Fig. 5(c) shows the output spectrum obtained at the optimal DC bias point (2.3 V), where the carrier is nearly 10 dB lower than the sideband images. This result was very promising, but the modulator operating point was not stable in time, mainly due to the influence of internal temperature variations induced by the bias voltage itself (Fig. 5(d)). As a consequence, some kind of stabilization method was required to achieve a good level of carrier rejection over long periods of time.

V. STABILIZATION TECHNIQUES

After analyzing the behavior of the modulator with the DC bias set manually, the up-conversion stage was modified by adding an inline optical power monitor photodiode at the output of the modulator in order to implement a closed-loop feedback for the bias system. This element generates a DC output voltage proportional to the input NIR power. This output signal is then used as feedback to stabilize the modulator operating point and minimize the optical carrier by two different techniques, described in the subsections below.
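Both techniques close the same loop around the modulator bias port. For orientation, the following is a minimal sketch of such a closed-loop bias routine, assuming hypothetical I/O helpers read_monitor_voltage() and set_bias_voltage() in place of the actual photodiode and bias hardware interfaces, which the paper does not specify. It simply walks the bias toward the point that minimizes the monitored power, which, for small RF drive, approximates the carrier-suppressing transmission null.

```python
import time

# Hypothetical hardware interfaces (not part of the paper's setup description):
# read_monitor_voltage() returns the inline photodiode DC voltage, proportional
# to the NIR power leaking through the modulator; set_bias_voltage(v) drives
# the SMA bias port.

def stabilize_bias(read_monitor_voltage, set_bias_voltage,
                   v_start=2.3, step=0.01, v_min=0.0, v_max=5.0,
                   settle_s=0.05, iterations=1000):
    """Walk the DC bias toward the transmission null, where the optical
    carrier is minimized, by a simple finite-difference descent."""
    v = v_start
    for _ in range(iterations):
        set_bias_voltage(v)
        time.sleep(settle_s)
        p0 = read_monitor_voltage()
        set_bias_voltage(v + step)
        time.sleep(settle_s)
        p1 = read_monitor_voltage()
        # Move in the direction that lowers the monitored power.
        v = v + step if p1 < p0 else v - step
        v = min(max(v, v_min), v_max)
    return v
```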
A. Commercial bias controller

The signal from the photodetector was fed to a commercial bias controller (model MBC-DG-BT from Photline), which generates a stabilized DC bias that is injected into the modulator to close the feedback loop (Fig. 6(a)). With this new setup, the optimal DC bias point was automatically adjusted to 2.23 V, giving the output spectrum shown in Fig. 6(b). This measurement shows the advantages of using a closed feedback loop: a spectrum that is stable in wavelength and amplitude, with the optical carrier suppressed by 36 dB with respect to the case in which no DC bias was used. With this configuration, the carrier level was 13 dB lower than the microwave signal. At this point, a filtering technique would still be needed in the correlation stage to reject, as much as possible, the remaining optical carrier and one of the two sidebands of the modulated NIR signals.

B. Lock-in amplifier

Since each bias controller is only able to stabilize a single modulator, the previous solution is not viable for the particular application of this work, taking into account the number of signals to control (4 signals per receiver) and the cost of the bias controllers. As an alternative to the use of commercial DC bias controllers, another solution using the ADA2200 evaluation card from Analog Devices (Fig. 7) is proposed. This card is based on a synchronous demodulator with a configurable analogue filter designed to make low-power magnitude and phase measurements with very high precision. Moreover, the card acts as a low-cost lock-in amplifier and allows the implementation of a closed feedback loop to lock the bias of the modulator at a stable optimal point.

In order to implement the feedback stage using the lock-in, three signals are needed: the output from the photodetector, the lock-in amplifier reference signal, and a DC bias. The output signal from the photodetector is amplified using the evaluation card as a lock-in amplifier, with the card set to its default configuration. Fig. 8 shows the photodetector output signal before and after being amplified by the ADA2200 evaluation card.

The amplified signal is then added to the reference obtained from the evaluation card Reference Clock (RCK) output and to the initial bias signal. The reference is a 3.2 V_pp, 6.25 kHz square-wave signal that produces a small modulation on top of the optical pulses to ensure optimal detection. When the overall signal was applied to bias the modulator, the output signal alternated between two states at a rate fixed by the reference signal, and a stable operating point could not be achieved by varying the amplitude or the frequency of the reference signal. For this reason, a passive feedback scheme without the reference signal was implemented. Using only a DC bias added to the output of the lock-in, a very stable operating point was achieved, yielding 46 dB carrier suppression at the output of the modulator. With this solution, the optical carrier was 23 dB lower than the sidebands. Fig. 9 shows the results obtained over a period of 15 min.

Although this last result is very promising, a filtering technique will again be necessary in the correlation stage to remove the undesired components from the modulator outputs.
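To see why the bias point controls the carrier level, one can model a single-drive Mach-Zehnder modulator with the textbook field transfer function E_out ∝ cos(πV/(2V_π)) E_in and inspect the output spectrum numerically. The sketch below uses an assumed V_π = 5 V, which is not quoted in the paper; it shows the carrier collapsing as the bias approaches the transmission null while the first-order sidebands survive.

```python
import numpy as np

Vpi = 5.0          # assumed half-wave voltage (V); not specified in the paper
f_rf = 12e9        # modulation tone (Hz)
fs = 512e9         # simulation sampling rate (Hz)
t = np.arange(4096) / fs

def spectrum_dB(v_bias, v_rf=1.0):
    """Baseband field at the MZM output and its normalized power spectrum (dB)."""
    drive = v_bias + v_rf * np.sin(2 * np.pi * f_rf * t)
    field = np.cos(np.pi * drive / (2 * Vpi))      # ideal MZM field transfer
    spec = np.abs(np.fft.rfft(field)) ** 2
    return 10 * np.log10(spec / spec.max() + 1e-15)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
carrier = np.argmin(np.abs(freqs - 0.0))           # DC bin = optical carrier
sideband = np.argmin(np.abs(freqs - f_rf))         # first-order sideband bin
for vb in (0.0, 2.5, 5.0):                         # zero, quadrature, null bias
    s = spectrum_dB(vb)
    print(f"Vb={vb:4.1f} V  carrier={s[carrier]:7.1f} dB  "
          f"sideband={s[sideband]:7.1f} dB")
```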
VI. CORRELATION STAGE

Up to now, owing to the lack of a Volume Bragg Grating (VBG) optical filter to remove the undesired spectral components from the up-converted signals, preliminary correlation-stage measurements were made using the laser diode directly as the source signal. This signal was divided through optical couplers in order to be distributed and introduced into a bundle. The signals from the output of the bundle were correlated by a pair of 100 mm focal-length lenses and finally detected by a Xenics Xeva NIR camera, whose thermo-electrically cooled InGaAs detectors are able to detect very weak optical signals, with powers lower than -60 dBm. The setup implemented to characterize the correlation stage is shown in Fig. 10.

In order to group the signals from the 1 × N coupler, a hexagonally packed bundle with a fixed array pitch of 250 µm was designed (Fig. 11). It was decided to fabricate a 46-element bundle in order to have the flexibility to optimize the distribution of different 2D antenna array scenarios for the prototype. Although the correlator prototype will have to combine 16 signals (4 groups of 4), 20 signals were available from the coupling stage to connect to the bundle for the preliminary tests.

The bundle should directly map the positions of the receiver antennas in a scaled fashion (homothetic mapping) [19]. This means that the optical fibers must be precisely positioned to match the antenna array, thereby necessitating the ability to precisely inject the modulated signals into an array of fibers matching the optimal antenna array of the prototype. This is the reason for selecting a hexagonal packing for both antennas and fibers.

In relation to the optical configuration of the lenses, we have used two different ones. A 4-f configuration was used to pass the optical beams through the lenses and reimage the fiber distribution illuminated in the bundle. This is useful for component alignment adjustments and other diagnostic purposes. As an example of this configuration, two different signal distributions were illuminated in the bundle: the first consisted of illuminating a compact geometry of 20 fibers, and the second used only 16 signals, representing a possible distribution for the interferometer prototype with four receivers (Fig. 12).

With the 4-f configuration, some differences in intensity among the signals were detected due to imperfections in some fibers within the bundle. For future measurements, it will be necessary to improve the fiber alignment in the bundle and to correct the manufacturing imperfections. After obtaining the first images with the NIR camera, a 6-f configuration was implemented. This configuration permits passing the optical beams through the lenses and synthesizing the image of the sky region covered by the instrument. In the reported case, the point spread function (PSF) of the fiber array distribution can be observed. Fig. 13 shows the 6-f setup and the PSF obtained with the scenario detailed in Fig. 12(c).

The PSF results from the previous figure show the existence of aberrations, due to flaws in the optical elements, that degrade the image obtained by the NIR camera. In order to correct these aberrations, a diaphragm was introduced into the optical system to limit the light incident on the camera (Fig. 14(a)). The achieved PSF is shown in Fig. 14(b).
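As a rough illustration of the PSF measurement, the far-field intensity pattern of a fiber array can be approximated by the squared magnitude of the Fourier transform of its aperture function (Fraunhofer approximation). The sketch below builds a hexagonally packed arrangement of small apertures on a 250 µm pitch and computes the resulting PSF; the 19-element layout, grid size, and spot width are illustrative choices, not taken from the paper's bundle drawings.

```python
import numpy as np

pitch = 250e-6                    # fiber pitch in the bundle (m)
grid = 512                        # simulation grid size
extent = 8e-3                     # simulated aperture plane width (m)

# Hexagonal lattice of fiber positions (illustrative 19-element patch,
# built from axial coordinates (q, r) of a hex grid).
centers = []
for q in range(-2, 3):
    for r in range(-2, 3):
        if abs(q + r) <= 2:
            centers.append((pitch * (q + r / 2.0),
                            pitch * (np.sqrt(3) / 2.0) * r))

x = np.linspace(-extent / 2, extent / 2, grid)
X, Y = np.meshgrid(x, x)
aperture = np.zeros((grid, grid))
for cx, cy in centers:
    # Each fiber modeled as a small Gaussian spot (width exaggerated
    # relative to a real mode field so the coarse grid resolves it).
    aperture += np.exp(-(((X - cx) ** 2 + (Y - cy) ** 2) / (50e-6) ** 2))

# Far-field PSF ~ |FT(aperture)|^2.
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
psf /= psf.max()
print("PSF computed on a", psf.shape, "grid; peak normalized to 1.0")
```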
Future work related to the correlation-stage performance analysis will consist of studying the different PSF results obtained by changing the fiber distributions in the bundle once it is improved. Also, a more realistic setup using the up-converted signals from the modulators and an appropriate VBG optical filter will be implemented. Likewise, taking advantage of the configuration possibilities of the NIR camera, it will be necessary to find the settings that achieve optimum operation of the correlation stage. Finally, the utility of implementing some kind of electronic phase control as an imager focusing method will also be analyzed.

VII. CONCLUSION

An up-conversion stage and preliminary signal correlation measurements for an electro-optic correlator prototype, with application to microwave large-format interferometry, have been described. The proposed prototype will obtain synthesized images of the Stokes parameters of an incoming wide-band linearly polarized signal by means of a real-time measurement in the NIR range.

Two different modulator operating-point stabilization techniques, aimed at achieving good levels of optical carrier suppression over long periods of time, have been reported. The stabilized modulator exhibited good performance, obtaining 46 dB carrier suppression in comparison with the case in which the modulator was not DC biased. With this configuration, a stable carrier level 23 dB lower than the signal of interest in the NIR spectrum was achieved.

In addition, a simplified correlation stage, based on two 100 mm focal-length lenses, has been presented. The measurements reported here provided information about the antenna array distribution in the fiber bundle and the PSF associated with the selected fiber array configuration. Preliminary results of this correlation stage show that it will be necessary to correct some fiber-bundle manufacturing errors, in order to reduce the measurement uncertainties, before starting the PSF analysis aimed at the optimal antenna/fiber array configuration.
Noise Source Identification Method for a Carpet Tufting Machine Based on CEEMDAN-AIC

In recent years, research on noise reduction for the carpet tufting machine has developed slowly. The gaps in the existing work mainly concern noise source identification for the carpet tufting machine. MEEMD (EEMD) has been proposed for source recognition on textile machinery. Owing to the characteristics of MEEMD/EEMD, it is difficult to set suitable white-noise control parameters. Moreover, MEEMD (EEMD) has only been tested via simulation; it has not been mathematically proven or evaluated. This leads to inevitable flaws in the research conclusions, and some conclusions are even wrong. The contribution of this paper is twofold. First, in order to recognize the noise sources of a carpet tufting machine, a method based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and the Akaike information criterion (AIC) is proposed. The CEEMDAN-AIC method is applied to the measured noise signal of a carpet tufting machine, and every selected effective component is analyzed. Noise source identification is achieved by combining the vibration signal characteristics of the main parts of the carpet tufting machine. CEEMDAN is used to decompose the measured noise signal of the carpet tufting machine into a finite number of intrinsic mode functions (IMFs). Then, singular value decomposition (SVD) is performed on the covariance matrix of the IMF matrix to obtain the eigenvalues. Next, the number of effective IMFs is estimated based on the AIC criterion, and the effective IMFs are selected by combining the energy characteristic index and the Pearson correlation coefficient method. Furthermore, reconstruction and comparison of the decomposed signals of MEEMD and CEEMDAN prove that CEEMDAN is effective and accurate in source recognition. The results show that the noise signal of the carpet tufting machine is a mixture of multiple noise source signals. The main noise sources of the carpet tufting machine include the shock caused by the impact of the tufted needle and loop hook and the vibration of the hook-driven shaft and pressure plate. This provides theoretical support for the noise reduction of the carpet tufting machine.

Introduction

The applicable standard [1] stipulates that, for steady noise, continuous noise exposure shall not exceed 80 dB when working for more than 8 hours. In 2013, China's newly revised GB/T 50087-2013 [2] stipulates that the workshop noise limit is 85 dB. At present, the noise of textile workshops is generally above 85 dB. Moreover, as the carpet tufting machine develops toward wider, heavier, higher-speed, and mechanically more complex designs, the noise generated becomes greater. Therefore, research on the noise reduction of textile machinery is urgent and significant. Generally speaking, the structure of the carpet tufting machine is as complex as its transmission path. The weaving process includes high-speed rotation, reciprocating motion, multi-motion coupling, impact, friction, and other conditions. This leads to more than one noise source. Even the same sound source often has multiple parts that produce sound, and the noise condition is unusual and complex [3-5]. Therefore, it is necessary to understand the characteristics of each sound source and its weight in the total noise. Then, the main noise sources can be located and identified in order to formulate reasonable noise reduction measures for a carpet tufting machine with multiple noise sources.
Noise energy is usually concentrated in the low-frequency band. Noise source feature extraction is one of the key techniques for source identification, especially for the carpet tufting machine. Therefore, how to accurately extract the source features from complex noise has always been a difficult problem in the textile industry. Some studies use classical Fourier analysis and wavelet transform as the basis of noise signal processing [6-8]. Fourier analysis cannot express the local time-frequency behavior of a signal, and its results cannot reflect the real features of the target signal well [9,10]. The wavelet transform can refine a signal at multiple scales through scaling and translation operations, solving the problem that the Fourier transform window size cannot change with frequency. However, the wavelet transform is still rooted in Fourier analysis and is limited by the selection of the wavelet basis function and the number of decomposition levels. For carpet tufting machine noise signal processing, we hope to obtain not only the frequency content of the signal but also the law by which the frequencies change with time. As an empirical signal analysis method, empirical mode decomposition (EMD) fundamentally overcomes the limitations of the Fourier transform and can, in theory, decompose any signal into IMFs [11-14]. The major drawback of EMD is the frequent appearance of mode mixing. Several improved EMD algorithms have been put forward to mitigate the mode mixing of EMD. Among them, the more universal and effective algorithm is ensemble EMD (EEMD) [15,16]. Although EEMD reduces mode mixing, it introduces a larger error between the original signal and the reconstructed signal. For textile machinery, Xu et al. [3,4] proposed modified ensemble empirical mode decomposition (MEEMD). Although MEEMD reduces the reconstruction error to a certain extent, it still cannot meet accuracy requirements. Marcelo and Gastón [17] presented complete ensemble EMD with adaptive noise (CEEMDAN). It can resolve mode mixing and reduce computation with negligible reconstruction error [18-20]. In this paper, the decomposition results of CEEMDAN and MEEMD are compared. CEEMDAN has higher accuracy, and it is more suitable for noise source extraction in the textile industry. The CEEMDAN algorithm is combined with the Akaike information criterion (AIC) source-number estimation method, yielding the CEEMDAN-AIC method for the noise source identification of the carpet tufting machine. The CEEMDAN-AIC method is then applied to identify the noise sources of a carpet tufting machine, and its main noise sources are accurately identified. This can provide theoretical support for the active noise reduction of the carpet tufting machine.

Noise Characteristic Analysis of a Carpet Tufting Machine

2.1. Structure of the Carpet Tufting Machine. In this paper, the experimental object is a four-meter tufted carpet tufting machine, shown in Figure 1. The carpet tufting machine is mainly composed of four parts: the host system, the loop-forming system, the yarn feeding system, and the needle bed and traverse system. The host system transforms the rotary motion of the spindle into the looping/cutting motion of the loop hook swinging left and right and the flocking motion of the tufting needle moving up and down. The loop-forming system completes the loop-forming movement of the carpet tufting machine. The yarn feeding system sends yarn to the loop-forming system.
The needle bed and traverse system inserts yarns of different colors into the tufted pinholes at intervals and assists the loop-forming system in completing the fabric weaving.

Under the operating condition in which the main shaft speed of the carpet tufting machine is 360 rpm, an LV-FS01 sensor and the Quick Signal Analyzer (Quick SA) real-time signal analysis software are used to collect the vibration signals of the main vibrating parts of the carpet tufting machine, and the collected vibration signals are analyzed one by one. The main vibration frequencies of the parts are shown in Table 1. The sampling frequency is set to 2048 Hz, and the sampling time is 20 s.

2.2. The Acquisition and Preprocessing of Noise Signals. Under the same conditions, the noise signals of the carpet tufting machine near the workers' ears are collected. According to GB/T 7111.6-2002 "Textile machinery-Noise test code-Part 6: Fabric manufacturing machinery," the sound pressure sensor is placed at a distance of 1 m from the machine surface and at a height of 1.6 m above the floor. The noise signals are collected using a BK4961 sound pressure sensor combined with the DHAS dynamic signal analysis system. The sampling frequency is 8192 Hz, and the sampling time is 20 s. A total of 6 experiments are carried out, and the experimental site layout is shown in Figure 2. All the collected noise signals are analyzed preliminarily. In order to improve computational efficiency, a typical data length of 1 s is selected as the analysis object. The signal waveform and the spectrum obtained after the fast Fourier transform of the signal are shown in Figure 3. It can be seen from the figure that the frequency content of the noise signals is mainly distributed below 400 Hz, and the frequency composition is complex, consisting mainly of low-frequency noise within 0-300 Hz (Table 2).

CEEMDAN-AIC Noise Recognition Algorithm

The core of the CEEMDAN-AIC approach is the CEEMDAN algorithm and the AIC criterion. The CEEMDAN-AIC algorithm flowchart is shown in Figure 5.

(1) CEEMDAN decomposition of the signal: CEEMDAN decomposition of the single-channel observation signal yields a finite number of IMF components. (a) Add two sets of positive and negative white noise signals of equal absolute value to the signal to be decomposed. (b) Multiple first-order components IMF_1^i are obtained by EMD of the new signals, and the first-order final component IMF_1 is obtained by averaging these first-order components. The first residue can be expressed as

r_1(t) = x(t) - IMF_1(t).   (1)

The second residue can be expressed as

r_2(t) = r_1(t) - IMF_2(t).   (2)

(c) For k = 1, 2, 3, ..., go to step (b) for the next k. The k-th residue can be expressed as

r_k(t) = r_{k-1}(t) - IMF_k(t).

Finally, CEEMDAN can be expressed as

x(t) = Σ_{k=1}^{K} IMF_k(t) + r_K(t),

where x(t) is the observation signal.

(2) Estimation of effective IMFs: The IMF matrix is obtained after the signal is decomposed by CEEMDAN. The covariance matrix of the IMF matrix is decomposed by SVD, yielding the eigenvalues corresponding to the IMFs. The noise background of the carpet tufting machine is colored noise, and the accuracy of the AIC for estimating the number of sources against a colored-noise background is poor. Therefore, Xu et al. [3] smoothed the noise eigenvalues by diagonally loading the covariance matrix in order to adapt the criterion to a colored-noise background, giving corrected eigenvalues. The AIC value is then calculated for L from 1 to m - 1, where N is the number of samples and m is the number of IMFs; in its standard (Wax-Kailath) form,

AIC(L) = -2N(m - L) ln[ g(L)/a(L) ] + 2L(2m - L),

where g(L) and a(L) are the geometric and arithmetic means of the m - L smallest corrected eigenvalues. The L corresponding to the smallest AIC value is the number of effective components.
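A minimal sketch of steps (1) and (2) of this pipeline is given below (the energy and correlation screening of step (3) follows after the sketch). It uses the PyEMD implementation of CEEMDAN and the standard Wax-Kailath AIC applied to the eigenvalues of the IMF covariance matrix; the diagonal-loading correction of [3] and the paper's exact noise settings are not reproduced here, so the parameters shown are assumptions.

```python
import numpy as np
from PyEMD import CEEMDAN  # pip install EMD-signal

def effective_imf_count(signal, trials=100):
    """Decompose `signal` and estimate the number of effective IMFs via AIC."""
    imfs = CEEMDAN(trials=trials)(signal)          # shape: (n_imfs, n_samples)
    cov = np.cov(imfs)                             # covariance of the IMF matrix
    eigs = np.linalg.svd(cov, compute_uv=False)    # eigenvalues, descending
    m, n = imfs.shape
    best_L, best_aic = 1, np.inf
    for L in range(1, m):
        tail = eigs[L:]
        g = np.exp(np.mean(np.log(tail)))          # geometric mean
        a = np.mean(tail)                          # arithmetic mean
        aic = -2.0 * n * (m - L) * np.log(g / a) + 2.0 * L * (2 * m - L)
        if aic < best_aic:
            best_L, best_aic = L, aic
    return imfs, best_L

# Toy usage: two tones plus noise, sampled at 8192 Hz as in the experiments.
fs = 8192
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 58 * t) + 0.5 * np.sin(2 * np.pi * 17 * t) \
    + 0.1 * np.random.randn(t.size)
imfs, L = effective_imf_count(x)
print(f"{imfs.shape[0]} IMFs extracted; AIC suggests {L} effective component(s)")
```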
(3) Selection of effective IMFs: Combining the energy characteristic index with the Pearson correlation coefficient [3], the total energy of each IMF and the correlation coefficient between each IMF and the original signal are calculated. According to the correlation coefficient of each IMF, all the IMFs are reordered to find the most significant components.

Noise Source Identification of a Carpet Tufting Machine

4.1. CEEMDAN decomposition of the noise signal is performed. The amplitude of the added white noise is 0.3 times the RMS of the noise signal, and the number of added white noise realizations is 200. Seven IMFs are obtained. The decomposition result is shown in Figure 6.

Estimation of Effective IMFs of Carpet Tufting Machine Noise. The covariance matrix of the IMFs is calculated and then decomposed by SVD. The eigenvalues obtained are 0.0857, 0.0551, 0.0318, 0.0198, 0.0083, 0.0015, and 0.0002. The corrected AIC values are shown in Figure 7. The smallest AIC corresponds to L = 4, so the estimated number of valid IMFs is 4. Combining the energy characteristic index with the Pearson correlation coefficient, the total energy of each IMF and the correlation coefficient between each IMF and the original signal are calculated. The results are shown in Table 3, from which it can be seen that the energies and correlation coefficients of IMF1-IMF4 are large. Because the number of effective IMFs of the carpet tufting machine noise decomposed by CEEMDAN is 4, IMF1-IMF4 are effective. IMF5-IMF7 are not effective, as their energies and correlation coefficients are small.

Characteristic Analysis of Effective Noise and Noise Source Identification. The amplitude-frequency plot of each effective IMF of the carpet tufting machine noise, shown in Figure 8, is obtained through the fast Fourier transform, and Figure 9 shows the time-frequency representations of IMF1-IMF4. According to Figure 6 and Figure 9(a), IMF1 is a shock signal. It impacts about 6 times per second, which corresponds to the spindle motor speed (360 r/min): the spindle rotates once approximately every 0.17 s, and the needle row punctures the base cloth at the same interval. A peak appears on the amplitude-time plot of IMF1 at this interval (Figure 6). Therefore, IMF1 is the shock noise caused by the impact of the tufted needle and loop hook. Background noise signals in the factory are collected and decomposed by CEEMDAN. The time-frequency representation of the background noise is shown in Figure 10. Comparing Figure 10 with Figure 9(b), it can be seen that the frequency distributions are similar and that the signal frequency characteristics all change with time. Therefore, IMF2 derives from the background noise. The vibration time-frequency representations of the hook-driven shaft and pressure plate are shown in Figure 11. The amplitude-frequency plots of IMF3 and IMF4 are extracted from Figure 8 and magnified along the frequency axis, resulting in Figure 12. According to Figure 12(a), there is an obvious peak at 58 Hz; the frequency does not change with time, and no periodic shock characteristics or concentrated frequency distribution are observed. The amplitude at 58 Hz is the largest in the whole frequency domain, and it is about 10 times the spindle rotation frequency. Figure 9(c) and Figure 11 show that the phase difference between the pressure plate and the hook-driven shaft is about 180° and that they alternate with each other along the whole time axis.
By measuring the main vibration frequencies of the hook-driven shaft and pressure plate (58.3 Hz and 52.7 Hz), it can be concluded that IMF3 is the vibration noise caused by the vibration of the hook-driven shaft and pressure plate. Figure 12(b) shows an obvious peak at 17 Hz; the frequency distribution is narrow and below 20 Hz. The audible frequency range of the human ear is from 20 Hz to 20 kHz. Therefore, IMF4 belongs to the infrasound range and cannot be perceived by the human ear, so it is not necessary to trace IMF4 to a noise source.

Comparison with the MEEMD Noise Recognition Algorithm. In Section 4.1, CEEMDAN-AIC was applied directly to decompose and analyze the noise signal, but this alone is not sufficient to verify the accuracy of the method. It is necessary to add an exactly known calculation example to verify the accuracy of the method and to compare the results of CEEMDAN and MEEMD. Based on the features of the measured noise signal, the simulation signal for this example can be expressed as (Figure 13)

m_1(t) = 5 e^{-π((t-500)/100)²} cos( (5π/6)(t - 1000) ),
m_2(t) = cos( (4π/125)(t - 1000) ) + 10 sin( πt/2500 ),

and the simulation signal is m_1 + m_2 + m_3.

According to Figures 13-15, it is obvious that MEEMD presents strong mode mixing and contains spurious components. For MEEMD, IMF1-IMF4 have large energies, yet they do not represent the real information of the simulation signal. For CEEMDAN, the decomposition is stopped once the current residue no longer satisfies the IMF conditions. Figures 16 and 17 show that, compared with MEEMD, the reconstruction error produced by CEEMDAN is negligible and the energy distribution is more uniform. CEEMDAN decomposes the simulation signal into three effective components and represents the real information of the simulation signal more accurately than MEEMD.

The MEEMD decomposition of the noise signal is shown in Figure 18, where 10 IMFs are obtained. Comparing this with Figure 6, it is obvious that MEEMD exhibits frequent mode mixing [4]. The amplitude-frequency plot of each effective IMF of MEEMD, shown in Figure 19, is obtained through the fast Fourier transform. The energy error of the MEEMD algorithm is very large; it can change the energy distribution of the modal components, which seriously affects the identification of the noise source. The conclusions in [4] state that the noise of the carpet tufting machine is mainly composed of friction between the tufting needle and base cloth (IMF1) and the shock noise caused by the impact of the tufted needle and loop hook. In contrast, CEEMDAN stops decomposing once the residue criterion is met, which reduces the appearance of spurious components in the decomposition results. To illustrate this point, the reconstruction-signal errors of CEEMDAN and MEEMD are compared. According to Table 4, the energy error of the MEEMD reconstruction signal, compared with the CEEMDAN reconstruction signal, is 31.1%. It is obvious that CEEMDAN is more suitable for the noise source extraction of the carpet tufting machine, and that the conclusions of [4] are not accurate.

Conclusions

In this paper, using the CEEMDAN-AIC algorithm combined with the structural characteristics of the carpet tufting machine and the related experimental analysis, the noise sources of the carpet tufting machine are identified. The conclusions are as follows: (1) A CEEMDAN-AIC algorithm for the noise source identification of the carpet tufting machine is presented. (2) The CEEMDAN-AIC algorithm is used to process the noise measured near the ears of workers.
Four effective IMFs of the noise of the carpet tufting machine are obtained; among them, IMF2 and IMF4 are background noise and infrasound, respectively. (3) Based on the analysis of IMF1 and IMF3, combined with the vibration characteristics of each machine part of the carpet tufting machine, it can be concluded that the noise is mainly composed of the vibration of the hook-driven shaft and pressure plate and the impact of the tufted needle and loop hook.

Data Availability

The research data are from previously reported studies and datasets. The related research data can be made available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest with respect to the research, authorship, and/or publication of this article.
Parameter estimation in a coupled system of nonlinear size-structured populations. Mathematical Biosciences and Engineering.

A least squares technique is developed for identifying unknown parameters in a coupled system of nonlinear size-structured populations. Convergence results for the parameter estimation technique are established. Ample numerical simulations and statistical evidence are provided to demonstrate the feasibility of this approach.

Introduction

A typical direct problem for structured populations is to use knowledge of the underlying mechanisms at the individual level, such as growth, mortality, and reproduction rates, to deduce the behavior at the population level. This approach has been extensively studied for many kinds of models, including structured and non-structured populations. In practice, however, our knowledge of the vital rates may be incomplete [40]. In fact, in many animal and plant populations the processes at the individual level are not accessible to direct observation [47]. For example, for nonlinear structured models the dependence of the reproduction and mortality rates on the total population is sometimes completely unknown [37]. Even for linear structured models, one may not be able to obtain the exact dependence of the vital rates on the age or size structure [40]. In these cases, one resorts to an inverse problem approach, namely using knowledge about the behavior at the population level (e.g., observations of total population numbers) to deduce the underlying mechanisms at the individual level.

In recent years many researchers have focused their attention on developing methodologies for solving inverse problems governed by structured population models (e.g., [1]-[3], [12]-[17], [19]-[23], [25]-[34], [40]-[49]). In what follows, we briefly review some of the recent work on such inverse problems. For age-structured population models, several approaches have been developed to recover unknown individual vital rates. For example, in [40,43] a fixed point iterative technique was developed to determine the death rate from census data on the age distribution of the population; therein, conditions on the data are given that lead to a unique solution. In [26] the authors formulated the inverse problem as an operator equation, and the least squares method was then used to compute its solution. Due to the ill-posedness of the problem, a regularization technique was considered; in addition, the authors proved that the resulting scheme has a convergence rate of Hölder type, although no numerical results were reported. A least squares approach was also adopted in [19] for a nonlinear age-structured population model to estimate unknown coefficients from a set of fully discrete observations of the population. Although the convergence of the computed minimizers to a minimizer of the least squares problem was established and numerical results were presented, for many real populations it is generally difficult to obtain discrete observations with respect to age, whereas other quantities, such as the total population number, are easily obtained. In [25] a model describing the evolution in time of a size/age-structured population was considered.
A moving finite element method was used to study the identification problem for such a model, and convergence results for the parameter estimation technique were reported. In [30], by writing a linear age-structured model using the cumulative formulation approach (see, e.g., [24]), the authors studied the inverse problem of identifying the birth and death rates from data on the total population size and the cumulative number of births. They also provided conditions on the data that guarantee the uniqueness of the solution to the inverse problem.

For size-structured population models, the least squares approach has often been used for parameter identification. For example, it was used in [15,16] to estimate the growth rate distribution in a linear size-structured population model. A similar technique was subsequently applied to a semilinear size-structured model in [34], where the mortality rate depends on the total population due to competition. In [2] an inverse problem governed by a phytoplankton aggregation model was studied, and convergence and numerical results for identifying the coagulation kernel were provided. Later, this technique was extended to identify parameters in a size-structured population model in [1,3], where all the individual vital rates (growth, mortality and reproduction) depend on the total population level. Therein, these parameters are identified from a set of observations corresponding to the total population number. A finite difference method was then used to approximate the infinite dimensional problem, and convergence results for the computed parameter estimates to the true parameter were established. To our knowledge, [3] was the first paper to provide convergence results for parameter estimates when the growth rate is a nonlinear function of the total population (i.e., when the size-structured model is represented by a quasilinear first order hyperbolic initial boundary value problem).

In this paper we extend the discussion in [3] to the following coupled system of quasilinear size-structured population models:

(u_I)_t + (g_I(x, P(t; q)) u_I)_x + m_I(x, P(t; q)) u_I = 0,   (x, t) ∈ (0, L] × (0, T],
g_I(0, P(t; q)) u_I(0, t; q) = C_I(t) + Σ_{J=1}^{N} ∫_0^L γ_{I,J} β_J(x, P(t; q)) u_J(x, t; q) dx,   t ∈ (0, T],
u_I(x, 0) = u_{I,0}(x),   x ∈ [0, L].   (1.1)

Here q = (q_1, q_2, ..., q_N) with q_I = (g_I, m_I, β_I, C_I), I = 1, 2, ..., N, denotes the parameters to be identified. The function u_I(x, t; q), I = 1, 2, ..., N, is the parameter-dependent size density (number per unit size) of individuals in the I-th population having size x at time t, and P(t; q) = Σ_{I=1}^{N} ∫_0^L u_I(x, t; q) dx is the total population at time t. The function g_I denotes the growth rate of an individual in the I-th population, m_I denotes the mortality rate of an individual in the I-th population, and β_I is the reproduction rate of an individual in the I-th population. The function C_I represents the inflow rate of zero-size individuals of the I-th population from an external source (e.g., in a tree population model, seeds moved by wind).

The model (1.1), which was developed by the authors in [4], is a generalization of several size-structured population models (usually referred to as structured models with rate distributions) which have been investigated in [14,15,16,34]. Motivated by the fact that, in addition to observable characteristics such as the age or size of the individuals, non-observable genetic characteristics may often play a crucial role in the development of the individuals, the researchers in [14] presented the first such generalization of the classical Sinko-Streifer model.
This model, which is a linear version of (1.1), has vital individual rates that are independent of the total population and distributed over an infinite-dimensional admissible parameter space equipped with a probability measure. It was shown through numerical simulations in [14] that there is a crucial difference between the dynamics of distributed-rate size-structured population models and the classical Sinko-Streifer models. In particular, the classical Sinko-Streifer model cannot exhibit dispersion of the population density in age or size except under biologically unreasonable conditions on the growth rate [15]. That is why the classical Sinko-Streifer models are in conflict with field data collected by experimental biologists; these data sets show that a population with a unimodal distribution evolves into a bimodal distribution (see [14] and [41]). In [17] the authors used a least squares approach to fit these distributed-rate models to data obtained in [14]. The resulting good fit indicates that such a modification is crucial if these models are to be used as prediction tools.

In addition to extending the theory in [3] to the coupled quasilinear system (1.1), a main novelty of our current research is that we report on extensive numerical simulations. These simulations are then used to obtain statistical results (in the form of confidence intervals) which provide solid evidence for the feasibility of this approach. It is worth pointing out that, with the exception of [28], the above-mentioned articles do not report on any statistical studies.

As the use of numerical methods for estimating functional parameters becomes more widely accepted in the biological sciences, it is becoming increasingly important for investigators to support the efficacy of proposed numerical algorithms not only with numerical simulation results but also with confidence intervals on estimated parameters. This can be done by calculating standard errors in a number of sophisticated ways (e.g., pointwise confidence intervals or bands as in [38,39,48], uniform bands [32], simultaneous confidence bands [31], etc.). Here we simply compute the pointwise standard errors using the pointwise sample variances from a large number (1000) of inverse problem simulations. While in our efforts we emphasize (regularized) ordinary least squares estimators, the ideas and methods presented in this paper can readily be used with maximum likelihood estimators as well as other standard estimators found in the statistical literature.

It is also worth noting another connection between statistical methods and our efforts in this paper. The models we use here involve a form of "mixing" distributions found in the literature on mixed effects, random effects or hierarchical methods (see, for example, [20,21,22,35,36,46]). However, the models we investigate entail mixing that cannot be decoupled into individual dynamics and thus result in fully coupled dynamics (see our closing remarks in Section 4).

By a weak solution to problem (1.1) we mean a bounded and measurable function u(x, t; q) = (u_1(x, t; q), u_2(x, t; q), ..., u_N(x, t; q)) satisfying, for t ∈ [0, T], I = 1, 2, ..., N, and every test function φ ∈ C¹([0, L] × [0, T]),

∫_0^L u_I(x, t; q) φ(x, t) dx = ∫_0^L u_{I,0}(x) φ(x, 0) dx
  + ∫_0^t ∫_0^L u_I(x, s; q) [ φ_s(x, s) + g_I(x, P(s; q)) φ_x(x, s) - m_I(x, P(s; q)) φ(x, s) ] dx ds
  + ∫_0^t φ(0, s) [ C_I(s) + Σ_{J=1}^{N} ∫_0^L γ_{I,J} β_J(x, P(s; q)) u_J(x, s; q) dx ] ds.

We first impose a condition on the initial data: for any I = 1, 2, ..., N,

(H1) u_{I,0} ∈ BV[0, L] and u_{I,0}(x) ≥ 0.
Depending on the values of the constants 0 ≤ γ_{I,J} ≤ 1, the model (1.1) admits two different interpretations. If γ_{I,I} = 1 and γ_{I,J} = 0 for I ≠ J, the model represents the dynamics of several populations competing for common resources. On the other hand, if γ_{I,J} > 0 for I, J = 1, 2, ..., N, then the model may describe the dynamics of one population consisting of N subpopulations, each with its own characteristics; in this case γ_{I,J} represents the probability that an individual of the J-th subpopulation reproduces an individual of the I-th subpopulation. Therefore, two different ways of observing data will be considered, leading to two different least-squares functionals to be minimized.

The first is based on the assumption that the model (1.1) describes N different competing populations. Hence observations Z_{I,k}, which correspond to the total number of individuals in the I-th population at time t_k, are assumed to be available (this case corresponds to γ_{I,I} = 1 and γ_{I,J} = 0, I ≠ J). We define the least-squares cost functional for this case to be

J(q) = Σ_{I=1}^{N} Σ_k | log(1 + P_I(t_k; q)) - log(1 + Z_{I,k}) |²,   (1.4)

which is minimized over Q. The other case assumes that (1.1) models one species which has been divided into N not readily distinguishable subpopulations. In this case, we assume that we can only observe aggregate data Z_k, the total number of individuals at time t_k (this case corresponds to γ_{I,J} > 0, I, J = 1, 2, ..., N). We define the least-squares cost functional

J(q) = Σ_k | log(1 + P(t_k; q)) - log(1 + Z_k) |²,   (1.5)

which is minimized over Q. We remark that minimizing (1.4) over Q is equivalent to the maximum likelihood estimation of q if the residuals ε_{I,k} are i.i.d. normal, and minimizing (1.5) over Q is equivalent to the maximum likelihood estimation of q if ε_k = log(1 + Z_k) - log(1 + P(t_k; q)) are i.i.d. normal.

The paper is organized as follows. In Section 2, we present a finite difference scheme for computing the solution of (1.1) and then provide convergence results for the parameter estimation technique. In Section 3, we give ample numerical and statistical results. Some concluding remarks are made in Section 4.

Approximation Scheme and Convergence Theory

The following notation will be used throughout the paper: Δx = L/n and Δt = T/l denote the spatial and temporal mesh sizes, respectively. The mesh points are given by x_j = jΔx, j = 0, 1, 2, ..., n, and t_k = kΔt, k = 0, 1, 2, ..., l. We denote by u_j^{I,k}(q) and P^k(q) the finite difference approximations of u_I(x_j, t_k; q) and P(t_k; q), respectively, and we let g_j^{I,k} = g_I(x_j, P^k(q)), m_j^{I,k} = m_I(x_j, P^k(q)), and β_j^{I,k} = β_I(x_j, P^k(q)). We define the backward difference operator D_x u_j^{I,k} = (u_j^{I,k} - u_{j-1}^{I,k})/Δx, and the l¹, l∞ and BV norms of u^{I,k} by

||u^{I,k}||_1 = Σ_{j=1}^{n} |u_j^{I,k}| Δx,   ||u^{I,k}||_∞ = max_{0≤j≤n} |u_j^{I,k}|,   ||u^{I,k}||_{BV} = Σ_{j=1}^{n} |u_j^{I,k} - u_{j-1}^{I,k}|.

We then discretize the partial differential equation in (1.1) using the following implicit finite difference approximation:

(u_j^{I,k+1} - u_j^{I,k}) / Δt + (g_j^{I,k} u_j^{I,k+1} - g_{j-1}^{I,k} u_{j-1}^{I,k+1}) / Δx + m_j^{I,k} u_j^{I,k+1} = 0,   j = 1, 2, ..., n,   (2.1)

with the initial condition u_j^{I,0} = u_{I,0}(x_j) and the boundary condition discretized analogously to the renewal condition in (1.1). If we define u^k(q) = (u_1^{1,k}, ..., u_n^{1,k}, ..., u_1^{N,k}, ..., u_n^{N,k})^T, then (2.1) can be equivalently written as a system of linear equations

A^k u^{k+1}(q) = b^k,   (2.2)

where b^k collects the terms from the previous time level and the boundary condition, and A^k is a block diagonal matrix whose blocks are lower triangular. Note that, using the assumptions on our parameters, one can easily show that equation (2.2) has a unique solution satisfying u^{k+1}(q) ≥ 0, k = 0, 1, ..., l - 1.
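The following is a minimal sketch of one time step of such an implicit upwind discretization for a single population (N = 1), under the same assumptions as above (backward differences in x, nonlocal birth term evaluated by a right-endpoint quadrature at the previous time level, which is an assumption rather than the paper's stated choice); it exploits the lower-triangular structure noted above by forward substitution instead of assembling A^k.

```python
import numpy as np

def implicit_upwind_step(u, g, m, beta, C, dx, dt):
    """One implicit upwind step for u_t + (g u)_x + m u = 0 with a
    renewal boundary condition; g, m, beta are arrays on the grid,
    frozen at the previous time level as in the scheme above."""
    n = u.size - 1
    new = np.empty_like(u)
    # Renewal boundary: g(0) u_new[0] = C + sum_j beta_j u_j dx
    # (previous-level densities in the quadrature keep the system
    # lower triangular -- an assumption, not the paper's choice).
    new[0] = (C + dx * np.sum(beta[1:] * u[1:])) / max(g[0], 1e-12)
    # Interior: (new_j - u_j)/dt + (g_j new_j - g_{j-1} new_{j-1})/dx
    #           + m_j new_j = 0, solved by forward substitution.
    for j in range(1, n + 1):
        new[j] = (u[j] / dt + g[j - 1] * new[j - 1] / dx) \
                 / (1.0 / dt + g[j] / dx + m[j])
    return new

# Toy usage on [0, 1] with illustrative rates (the initial density is
# the one used in Section 3; g, m, beta here are stand-ins).
n, dx, dt = 200, 1 / 200, 1 / 200
x = np.linspace(0, 1, n + 1)
u = 3 * np.exp(-2 * (x - 0.1) ** 2)
g = 0.1 * (1 - x)
m, beta = 0.1 * np.ones_like(x), 0.5 * x * (1 - x)
for _ in range(200):
    u = implicit_upwind_step(u, g, m, beta, C=0.0, dx=dx, dt=dt)
print("total population after t=1:", dx * u[1:].sum())
```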
The above approximation can be extended to a family of functions {U^I_{Δx,Δt}(x, t; q)} defined by piecewise-constant interpolation of the grid values:

U^I_{Δx,Δt}(x, t; q) = u_j^{I,k}(q) for x ∈ (x_{j-1}, x_j], t ∈ (t_{k-1}, t_k].   (2.3)

Since our parameter set is infinite dimensional, a finite dimensional approximation of the parameter space is also necessary for computing minimizers. To this end, we consider the following finite-dimensional approximations of (1.4) and (1.5), respectively:

J_{Δx,Δt}(q) = Σ_{I=1}^{N} Σ_k | log(1 + P_I^k(q)) - log(1 + Z_{I,k}) |²,   (2.4)

J_{Δx,Δt}(q) = Σ_k | log(1 + P^k(q)) - log(1 + Z_k) |²,   (2.5)

each of which is minimized over Q^M, a compact finite-dimensional approximation of the parameter space Q. In order to establish the convergence results for the parameter estimation technique, we use an approach similar to that in [3], which is based on the abstract theory in [18].

Theorem 2.1. Let q_i = (q_{1,i}, q_{2,i}, ..., q_{N,i}) and suppose that for each I, q_{I,i} → q_I in Q_I. Let U_{Δx_i,Δt_i}(x, t; q_i) denote the solution of the finite difference scheme, and let u(x, t; q) = (u_1(x, t; q), u_2(x, t; q), ..., u_N(x, t; q)) be the unique weak solution of our problem with initial condition u_0 and parameter q. Then U_{Δx_i,Δt_i}(x, t; q_i) → u(x, t; q) as Δx_i, Δt_i → 0.

Proof. From the compactness of Q_I and the results of [4], there exist positive constants c_1, c_2, c_3, c_4 such that for each I = 1, 2, ..., N the approximations U^I_{Δx_i,Δt_i} are bounded in l∞ and in BV, with L¹ continuity in time of the form ||U^I(·, r) - U^I(·, s)||_1 ≤ c_4 (r - s) for r > s. Thus, for each I there exists a BV([0, L] × [0, T]) function û_I(x, t) such that U^I_{Δx_i,Δt_i}(x, t; q_i) → û_I(x, t) in L¹(0, L) uniformly in t. Hence, from the uniqueness of bounded-variation weak solutions established in [4], we only need to show that û(x, t) = (û_1(x, t), û_2(x, t), ..., û_N(x, t)) is the weak solution corresponding to the parameter q. To this end, we multiply the first equation of (2.1) by a test function evaluated at the grid points, multiply both sides of the resulting equality by Δx_i Δt_i, and sum over j = 1, 2, ..., n and k = 0, 1, ..., l - 1 to obtain a discrete analogue of the weak formulation. Since g_n^{I,k,i} = 0 and q_{I,i} → q_I as i → ∞ in Q_I, passing to the limit shows that û satisfies the weak formulation. Thus, û(x, t) is the weak solution corresponding to the parameter q.

Since the logarithm function is continuous on [1, ∞), as an immediate consequence of Theorem 2.1 we obtain the following:

Corollary 2.2. Let U_{Δx,Δt} denote the numerical solution of (2.1) with parameters q_i → q and Δx_i, Δt_i → 0. Then J_{Δx_i,Δt_i}(q_i) → J(q).

In the next theorem, we establish the continuity of the approximate cost functional, so that the computational problem of finding an approximate minimizer is well posed.

Theorem 2.3. Let Δx and Δt be fixed. For each q_I ∈ Q_I, let U^I_{Δx,Δt}(x, t; q) denote the solution of the finite difference scheme, and let q_{I,i} → q_I in Q_I. Then J_{Δx,Δt}(q_i) → J_{Δx,Δt}(q).

Proof. Define {u_j^{I,k,i}} and {u_j^{I,k}} to be the solutions of the finite difference scheme with parameters q_i and q, respectively, and let v_j^{I,k,i} = u_j^{I,k,i} - u_j^{I,k}. Then v_j^{I,k,i} satisfies a difference system, (2.6)-(2.7), for 1 ≤ j ≤ n, where P^{k,i} denotes P^k(q_i). Multiplying both sides of (2.6) by sgn(v_j^{I,k+1,i}) Δx and summing over j = 1, 2, ..., n, we obtain (2.8). Using the fact that |Σ_j a_j| ≤ Σ_j |a_j| for any a_j, we obtain (2.9), and by (2.7) we obtain (2.10). Summing (2.8) over I = 1, 2, ..., N and using (2.9) and (2.10), together with the splitting

|g_{I,i}(x_j, P^{k,i}) - g_I(x_j, P^k)| ≤ |g_{I,i}(x_j, P^{k,i}) - g_{I,i}(x_j, P^k)| + |g_{I,i}(x_j, P^k) - g_I(x_j, P^k)|

and hypothesis (H4), we can bound max_j |g_{I,i}(x_j, P^{k,i}) - g_I(x_j, P^k)|, and similarly the corresponding differences for m_I and β_I. Straightforward computations then yield a recursive bound of the form Σ_I ||v^{I,k+1,i}||_1 ≤ (1 + cΔt) Σ_I ||v^{I,k,i}||_1 + Δt ρ^{k,i}. Since for each k, ρ^{k,i} → 0 as i → ∞, the desired result follows from this inequality.
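A simple way to probe such convergence numerically is a grid-refinement study: solve the scheme on successively halved meshes and watch the L¹ distance between consecutive solutions shrink. The sketch below does this for the single-population step from the previous listing (the rates are again illustrative stand-ins).

```python
import numpy as np
# Assumes implicit_upwind_step from the earlier sketch is in scope.

def solve(n_cells, T=1.0):
    """Run the implicit upwind scheme to time T on n_cells intervals."""
    dx = dt = 1.0 / n_cells
    x = np.linspace(0, 1, n_cells + 1)
    u = 3 * np.exp(-2 * (x - 0.1) ** 2)
    g = 0.1 * (1 - x)
    m, beta = 0.1 * np.ones_like(x), 0.5 * x * (1 - x)
    for _ in range(int(T / dt)):
        u = implicit_upwind_step(u, g, m, beta, C=0.0, dx=dx, dt=dt)
    return x, u

prev = None
for n in (100, 200, 400, 800):
    x, u = solve(n)
    if prev is not None:
        xp, up = prev
        # Restrict the fine solution to the coarse grid by slicing,
        # valid because each refinement exactly doubles the mesh.
        diff = np.sum(np.abs(u[::2] - up)) * (xp[1] - xp[0])
        print(f"n={n:4d}  L1 difference vs previous mesh: {diff:.3e}")
    prev = (x, u)
```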
Theorem 2.4. Suppose that Q^M is a sequence of compact subsets of Q and that, for each q ∈ Q, there exists a sequence q^M ∈ Q^M such that q^M → q as M → ∞. Then the functional J_{Δx,Δt} has a minimizer over Q^M. Furthermore, if q_M^i denotes a minimizer of J_{Δx_i,Δt_i} over Q^M and Δx_i, Δt_i → 0, then any subsequence of q_M^i has a further subsequence which converges to a minimizer of J.

Proof. The proof of this theorem is a direct application of the abstract theory in [18], based on the convergence J_{Δx_i,Δt_i}(q_i) → J(q).

Numerical Results

In this section, we present ample numerical simulations and statistical results. In all of the simulations below we assume L = 1, T = 1, and C_I(t) = 0 for I = 1, 2, ..., N. In subsections 3.1 and 3.2, we assume N = 1 and that all the parameters are known except for β. To estimate β we use data which are generated computationally as follows: we solve (2.1) and (2.3) for U_{Δx,Δt}(x, t) with a prescribed "true" β and perturb the resulting total population values with noise, where ε_k is a random sample from a normal random number generator with mean zero and standard deviation σ = 0.02.

3.1. 1-D linear estimation problem for a finite dimensional parameter space when N = 1.

In our first example we assume that β is of the separable form β(x, P) = b(x) exp(-3P), where b(x) = µx(1 - x^ν), with µ and ν two unknown constants to be identified. Hence, the solution of our least-squares problem involves identifying the two constants µ and ν from a compact subset of R²₊ so as to minimize the least-squares cost functional (2.4) (with N = 1).

In order to test the performance of the parameter-estimation technique when no infinite dimensional effects are present, in Figure 1 we choose Δx = Δt = 0.005 both for generating the data and for the numerical solution (2.3) in the least-squares problem. This avoids the infinite-dimensional effect of the partial differential equation given in (1.1); in fact, if the noise is removed from the data and the parameters µ and ν are known, then numerically solving our model reproduces the data exactly.

In Figure 2 we use Δx = Δt = 0.005 to generate the data while we use Δx = Δt = 0.01 for the numerical solution (2.3) in the least-squares problem. Thus, in this case the data are not exactly attained by our model even if the noise is removed (an error is present due to the finite-dimensional approximation of our infinite-dimensional model). The results of Figure 2 are obtained using the same values for the rest of the parameters as those of Figure 1.

A similar format for presenting the results of 1000 inverse problem calculations is used in Figures 1 and 2. The left part of each figure shows the S (in our case S = 1000) numerical results for the estimated parameter b_s(x), s = 1, 2, ..., S, versus the exact b(x); these 1000 distinct curves were obtained by solving 1000 inverse problems, each corresponding to a given noise sample {ε_k}. The right part shows the corresponding 95% confidence band (dashed line) versus the exact b(x) (solid line), where the 95% band is obtained by discarding the upper 2.5% and the lower 2.5% of these 1000 numerical results. Table 1 provides statistical results for the corresponding graphs, where AB(x) = (1/S) Σ_{s=1}^{S} (b_s(x) - b(x)) denotes the average bias of all approximations at x, RAB(x) = 100 AB(x)/b(x) denotes the relative average bias at x, and SE(x) denotes the sampling standard error at the point x, computed from the pointwise sample variance of the S estimates. Note that this is simply the usual asymptotic formula for the pointwise standard error (e.g., see pp. 28, 37 of [21] and p. 308 of [45]).
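A minimal sketch of how such pointwise summaries can be computed from an ensemble of estimated curves is given below; AB, RAB, and SE follow the definitions above, while the "exact" curve and the ensemble itself are synthetic stand-ins for the 1000 inverse-problem estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 11)
b_true = 2 * x * (1 - x)                     # illustrative "exact" b(x)
S = 1000
# Synthetic stand-in for the S inverse-problem estimates b_s(x).
B = b_true + 0.05 * rng.standard_normal((S, x.size))

AB = B.mean(axis=0) - b_true                 # average bias at each x
RAB = 100 * AB / np.where(b_true != 0, b_true, np.nan)  # relative bias (%)
SE = B.std(axis=0, ddof=1)                   # pointwise sample standard error
lo, hi = np.percentile(B, [2.5, 97.5], axis=0)          # 95% band

for xi, ab, rab, se in zip(x, AB, RAB, SE):
    print(f"x={xi:3.1f}  AB={ab:+.4f}  RAB={rab:+7.3f}  SE={se:.4f}")
print("95% band at x=0.5:", lo[5], hi[5])
```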
Although the estimates in both figures are good, the results in Figures 1-2 and Table 1 suggest that infinite-dimensional effects can lead to a slightly under-biased estimator. We suspect that this bias depends on the choice of the numerical scheme used for solving the infinite-dimensional partial differential equation model. Here we are using an upwind scheme for approximating the model and a right-endpoint sum for approximating all the integrals involved. This biased estimator may be improved if, for example, a centered finite difference approximation is used together with a trapezoidal rule for the integration. The above statistical results (essentially on how measurement error affects estimates) are based on a large number of numerical simulations (somewhat in the spirit of Bayesian-based MCMC calculations used to estimate means and variances of a probability distribution from "experimental" data). Any estimate of model parameters from data can also be accompanied by an estimate of uncertainty using standard regression formulations from statistics [21]. Thus, in the remaining part of this subsection, we present a statistically based method to compute the variance in the estimated model parameters q = (µ, ν).

An excerpt from Table 1:

x     AB(x)     RAB(x)    SE(x)
0.1   -0.0037   -0.6870   0.0749
0.2   -0.0092   -0.9580   0.0993
0.3   -0.0107   -0.8463   0.0975
0.4   -0.0079   -0.5497   0.0860

(Table 1: left and right parts are statistical results for Figure 1 and Figure 2, respectively.)

The sensitivity matrix X(q) in (3.1) has rows built from the derivatives of log(1 + P(t_i; q)) with respect to µ and ν. Substituting the numerical approximations of P(t_i; q), P_µ(t_i; q) and P_ν(t_i; q) for P(t_i; q), P_µ(t_i; q) and P_ν(t_i; q) in (3.1), we obtain the following approximation of X(q):

X(q) ≈ [ P_µ(t_i; q) / (1 + P(t_i; q))    P_ν(t_i; q) / (1 + P(t_i; q)) ],   i = 1, 2, ...

Under standard assumptions of classical nonlinear regression theory, we know that if ε̂_i ~ N(0, σ²), where ε̂_i is the difference between observation and model at time t_i, then the least-squares estimate q* is expected to be asymptotically normally distributed. In particular, for large samples we may assume

q* ~ N( q_0, σ² {X^T(q_0) X(q_0)}^{-1} ),   (3.2)

where q_0 is the true vector of parameters and σ² {X^T(q_0) X(q_0)}^{-1} is the true covariance matrix (see [21], Chapter 2). Since q_0 and σ² are not available, we follow standard statistical practice [5]: substitute the computed estimate q* for q_0, approximate σ² by the residual mean square, and use (3.2) to obtain the standard deviations for our estimates. In particular, taking V as the elementwise square root of the diagonal of the estimated covariance matrix σ̂² {X^T(q*) X(q*)}^{-1}, we take V_11 and V_22 to be the standard deviations for the parameters µ and ν, respectively. Tables 2 and 3 report the standard deviations of µ and ν for the results of the first eight numerical simulations of Figure 1 and Figure 2, respectively, and Table 4 provides the average standard deviations of µ and ν over all 1000 numerical simulations of Figures 1 and 2. We note that in most practical situations using experimental data, one does not expect to have 1000 experiments performed. But the above procedure will produce estimates of variances even in the case when one has only one data set!

3.2. 1-D linear estimation problem for an infinite dimensional parameter space when N = 1.

In this example, we assume that β is of the separable form β(x, P) = b(x) exp(-3P), where b(x) is an unknown function that we want to identify. We choose the parameter space Q = D, a set of suitably bounded and equicontinuous functions; by the Arzelà-Ascoli theorem [33], Q is compact in C[0, 1]. We approximate the infinite dimensional parameter space as follows: for M a positive integer and b ∈ Q, we set

b_M(x) = Σ_{i=0}^{M} b(x_i) φ_M^i(x; 0, 1),

where the φ_M^i(x; 0, 1) are the linear spline (hat) functions on a uniform mesh of the interval [0, 1], defined by φ_M^i(x_j) = 1 if i = j and 0 otherwise, with linear interpolation between mesh points.
It can be readily argued that lim_{M→∞} b_M = b in C[0, 1]. The solution of our finite dimensional identification problem then involves identifying the nodal values in R^{M+1}₊ so as to minimize the least-squares cost functional (2.4).

In order to indirectly implement the compactness constraints of Q, we use a regularized least squares cost functional of the form

J_α(q) = J_{Δx,Δt}(q) + α ||b_M||²,

where α > 0 is the regularization parameter.

The left part of each of the following figures again shows the S (= 1000) numerical results of the estimated parameter versus the exact parameter b(x). The right part shows the corresponding 95% confidence band (dashed line) versus the exact b(x) (solid line). The tables provide statistical results for the corresponding graphs.

Effect of the infinite-dimensional model on the parameter estimate. In Figure 3 we use Δx = Δt = 0.005 to generate the data and for the numerical solution (2.3) in the least-squares problem. This removes the infinite-dimensional effect of the partial differential equation given by (1.1). In Figure 4, however, we use Δx = Δt = 0.005 to generate the data and Δx = Δt = 0.01 to compute (2.3); thus, in this case the data are not exactly attained by our model even if the noise is removed. We observe that while the estimates in both figures are good, the results in Figures 3-4 and Table 5 suggest that infinite-dimensional effects can lead to a slightly under-biased estimator. (Table 5: left and right parts are statistical results for Figure 3 and Figure 4, respectively.)

Effect of the regularization parameter α on the parameter estimate. In Figures 5 and 6 we change the parameter α while keeping the rest fixed. Clearly, a low regularization parameter leads to relatively poor estimates, although the estimator in this case appears to be the least biased (see Figure 5 and the left part of Table 6). Increasing the value of α leads to better parameter estimates, but the estimator becomes more under-biased (see Figure 6 and the right part of Table 6). If this value is increased further, the estimator becomes still more biased, and the parameter estimate becomes worse than before. This suggests, not surprisingly, that there is an optimal choice of the parameter α which produces the best results for the parameter estimates. (Table 6: left and right parts are statistical results for Figure 5 and Figure 6, respectively.)

3.3. 1-D linear estimation problem for an infinite dimensional parameter space when N = 2.

In this section, we assume N = 2 and that all the parameters are known except for β_1 and β_2. To estimate them, we assume the separable forms β_1(x, P) = b_1(x) exp(-P) and β_2(x, P) = b_2(x) exp(-P), where b_1(x) and b_2(x) are unknown functions to be identified. To estimate b_1(x) and b_2(x), we use data which are generated computationally as follows. Let

γ_{I,J} = 1 if I = J and 0 if I ≠ J for Figure 7, and γ_{I,J} = 0.5, I, J = 1, 2, for Figure 8,

with u_{I,0}(x) = 3 exp(-2(x - 0.1)²), and with a fixed choice of functions for the parameters g_I, m_I and β_I. We then solve (1.1) for U^I_{Δx,Δt}(x, t), I = 1, 2, and perturb the computed totals to obtain the data for Figures 7 and 8, where ε_{I,k} and ε_k are random samples from a normal random number generator with mean zero and standard deviation σ = 0.02.

We choose the parameter space Q and approximate the infinite dimensional parameter space as follows: for M_1, M_2 positive integers and any (b_1, b_2) ∈ Q, we form the spline interpolants b¹_{M_1} and b²_{M_2} as in Section 3.2. Clearly, lim_{M_1,M_2→∞} (b¹_{M_1}, b²_{M_2}) = (b_1, b_2), and the solution of our finite dimensional identification problem involves identifying the nodal values in R^{M_1+1}₊ × R^{M_2+1}₊ so as to minimize the least-squares cost functional (2.4) or (2.5).
In order to indirectly implement the compactness constraints of Q, we again use the regularized least-squares cost functional: for Figure 7 we use the form with a separate regularization term for each of b¹_{M_1} and b²_{M_2}, and for Figure 8 we use the analogous form for the aggregate functional, where α_I > 0, I = 1, 2, are the regularization parameters and m = 100 for Figures 7 and 8.

In the rest of our simulations we use Δx = Δt = 0.005 to generate the data and Δx = Δt = 0.01 to solve the least-squares problem. Thus, in these cases the data are not exactly attained by our model even if the noise is removed. Note that the results in Figure 7 and Table 7 are slightly better than those in Figure 8 and Table 8. This is expected, since in Figure 7 we are sampling data from each of the two populations, which provides more information than sampling only the sum of the two populations, as is the case in Figure 8. Also note that in both of these figures we let M = M_1 = M_2 = 10.

Concluding Remarks

In this paper we have developed a numerical technique for identifying unknown parameters in a general size-structured population model. A main focus of the paper is a statistical study of the parameter estimation technique, carried out by calculating pointwise standard errors of the estimated parameters (functions) via thousands of numerical experiments.

Several conclusions can be drawn from our studies. 1) The method discussed above seems to perform well and produce good confidence intervals for the parameters. 2) When the infinite dimensional effects of the model and the parameter space are removed, the resulting numerical and statistical values suggest that the least-squares technique produces very good unbiased parameter estimates. 3) The type of numerical scheme used for approximating the infinite-dimensional model as well as the parameter space may influence the bias in the parameter estimation technique. 4) The commonly used regularization term is crucial for enforcing compactness and obtaining better estimates; however, it may also introduce more bias in the estimator.

We note in closing that the system (1.1) investigated in this paper is a special case of the measure-dependent aggregate dynamics problems formulated in [6], wherein individual (uncoupled) dynamics are not available. Inverse problems for such systems have been investigated in a number of applications, including cellular-level HIV modelling [7], hysteresis in viscoelastic materials [8,9], shear waves in biotissue [10], and electromagnetic interrogation in complex materials [11]. In a more general formulation (currently under investigation by the authors), one has a probability distribution F of individual parameters q = (g, m, β, C) on an admissible set Q. The system (1.1) is replaced by a continuum of systems for u(x, t; q), with the total population P(t; F) given by

P(t; F) = ∫_Q ∫_0^L u(x, t; q) dx dF(q) = ∫_Q ∫_0^L u(x, t; q) dx f(q) dq,

the latter equality holding if F has a density f. The aggregate dynamics for u depend explicitly on F through the dependence of the individual rate parameters (g, m, β, C) on the total population P. If F is a discrete measure with N atoms at q_J of mass f_J, then we have

P(t; F) = Σ_{J=1}^{N} f_J ∫_0^L u(x, t; q_J) dx.
P(t) denotes the total population at time t. The function g_I denotes the growth rate of an individual in the Ith population, m_I denotes the mortality rate of an individual in the Ith population, and β_I is the reproduction rate of an individual in the Ith population. The function C_I represents the inflow rate of zero-size individuals of the Ith population from an external source (e.g., in a tree population model, seeds moved by the wind). The tables provide statistical results for the corresponding graphs, where AB(x) = (1/S) Σ_{s=1}^{S} (b_s(x) − b(x)) denotes the average bias of all approximations at x and RAB(x) = 100 AB(x)/b(x) denotes the relative average bias of all approximations at x.

Figure 1: Δx = Δt = 0.005 to generate the data and to solve the least-squares problem. In the left part of the figure, each grey line denotes a distinct result for a given sample {ε_k}.
Figure 2: Δx = Δt = 0.005 to generate the data and Δx = Δt = 0.01 to solve the least-squares problem. In the left part of the figure, each grey line denotes a distinct result for a given sample {ε_k}.
Figure 3: M = 10, α = 3e−5. Each grey line in the left part of the figure denotes a distinct result for a given sample {ε_k}.
Figure 4: M = 10, α = 3e−5. Each grey line in the left part of the figure denotes a distinct result for a given sample {ε_k}.
Figure 5: M = 10, α = 1e−5. Each grey line in the left part of the figure denotes a distinct result for a given sample {ε_k}.
Figure 6: M = 10, α = 5e−5. Each grey line in the left part of the figure denotes a distinct result for a given sample {ε_k}.

3.3 1-D linear estimation problem for an infinite-dimensional parameter space when N = 2
The upper-left and lower-left parts of the following two figures represent the S (= 1000) numerical results of the estimated parameters b^1_{M_1}(x) and b^2_{M_2}(x) versus the exact parameters b_1(x) and b_2(x), respectively. The upper-right and lower-right parts show the corresponding 95% confidence intervals (dashed lines) versus the exact b_1(x) and b_2(x) (solid lines), respectively. The tables provide statistical results for the corresponding graphs.

Table 1: Left and right tables are statistical results for Figures 1 and 2, respectively.
Table 2: Standard deviation for the results of the first 8 numerical simulations of Figure 1.
Table 3: Standard deviation for the results of the first 8 numerical simulations of Figure 2.
Table 4: Average of the standard deviations over all numerical simulations of Figures 1-2 (Figure 1: µ = 1.1921, ν = 0.4566; Figure 2: µ = 1.9197, ν = 0.8572).
Table 5: Left and right tables are statistical results for Figures 3 and 4, respectively.
Table 6: Left and right tables are statistical results for Figures 5 and 6, respectively.
8,093.4
2005-03-01T00:00:00.000
[ "Biology", "Engineering", "Mathematics" ]
Vulnerability of Sustainable Islamic Stock Returns to Implied Market Volatilities: An Asymmetric Approach

There has been increasing interest in sustainable ways of investing, as enjoined by several sustainability initiatives. However, investors require effective portfolio diversification at various market conditions (stress, benign, and boom) and would consider sustainable equities only to the extent that they aid in the minimisation of portfolio risks. As a result, a better way investors can mitigate portfolio risk is by forming portfolios with relevant volatility indices, as enshrined in the extant literature. It therefore becomes necessary to investigate the susceptibility of sustainable Islamic stocks to shocks from volatility indices in order to enhance effective portfolio decisions. In this regard, we investigate the asymmetric effect of implied volatility indices on sustainable Islamic stocks across different market conditions. Hence, the quantile regression and quantile-on-quantile regression techniques are employed. The study discovered an asymmetric influence of volatility on sustainable Islamic stock returns at various quantiles. Furthermore, most volatilities' asymmetric effects were generally inversely associated with sustainable Islamic stock returns, implying diversification benefits across market outcomes. Also, with the exception of the extreme quantiles, there is a causal effect of volatilities on Islamic stock returns for most quantiles. It stands to reason that ordinary market outcomes, rather than market stress or boom, have a greater impact on causal estimates for our quantile regression model.

Introduction
The popularity of Islamic stocks has heightened over the years, drawing the attention of investors, policymakers, researchers, asset managers, and others. This is a result of their increased market performance relative to the conventional way of investing, even in times of crises. Investing in Islamic stocks further grants the opportunity to channel one's religious beliefs [1,2]. Aside from this, the high levels of integration among most conventional assets (see [3][4][5]) require that effective portfolios be formed with Islamic stocks, due to the latter's high extent of satisfying investors' risk tolerance in times of crises [2,6]. However, prior studies utilising Islamic stocks alone divulge that Islamic stocks exhibit similar patterns of high interactions [7,8], depicting low degrees of diversification in the future. This is not surprising because, although Islamic stocks are mostly insulated from existing crises, their similar responses to shocks weaken diversification potentials at various investment horizons. This has brought about many empirical studies investigating diversification benefits among conventional and Islamic assets simultaneously [2,6,[9][10][11][12]]. Findings from these studies generally divulge considerable levels of interactions or contagion (increases in correlations after the onset of crises) among conventional and Islamic equities. Comparatively, conventional equities are found to exhibit more volatility spillovers and excessive interactions than their Islamic counterparts [2] during market stress. The insight from these studies is that it is better to diversify among conventional as well as Islamic equities rather than concentrating on a particular asset class. This opens up a gap to further assess the asymmetric effect of implied market volatilities, which are forward-looking.
Application of implied volatilities has gained massive attention with conventional stocks [13,14] and cryptocurrencies [15][16][17], as well as commodities [5,18,19]. It is normally found in these studies that negative shocks are mostly transmitted from the implied market volatilities to these assets, demonstrating diversification, hedge, or safe haven benefits, depending on the market conditions, as a result of portfolio formation. It therefore becomes pertinent to examine the asymmetric effect of implied volatilities on Islamic stocks, which have gained investors' attention over time [20][21][22]. This is particularly important because Islamic stocks are more likely than not susceptible to external shocks [22,23]. This empirical discourse would highlight the relevance of forming reliable portfolios among market volatility indices, as external shock transmitters, and Islamic stocks. It would also give existing investors of Islamic stocks a chance to reconstruct or rebalance their portfolios to incorporate implied market volatilities. Also, observing the asymmetric effect of implied market volatilities provides existing investors of Islamic stocks the opportunity to hedge against shock transmission regarding contagion effects and to either redeploy or scale up their investments. Hence, a nascent and fledgling body of literature investigates the nexus between implied volatilities and Islamic stock returns. For instance, Karim and Masih [20], through the wavelet approach, investigated the asymmetric impact of realised and implied crude oil volatility on Islamic stock returns. However, the study of Karim and Masih [20] was limited to the oil market, thereby creating a myopic view of the nexus. The closest study to ours is that of Chang et al. [1], but it did not consider the influence of implied volatilities in the nexus. Some of the implied volatilities that have spillover effects on most financial markets around the world include the US VIX, a significant measure of investor fear and expectations [14], implied volatility in the energy markets, emerging markets volatility, and developed markets volatility. These implied volatilities have been touted to have a ravaging impact on financial markets [5,13,16,18,24], to which Islamic stocks could be more sensitive as a result of contagion effects from market interactions. This is because, recently, Islamic stocks have become linked to shocks from implied volatilities [22,24]. This can be traced to the behaviour of conventional investors and fund managers who seek to invest in Islamic stocks to minimise losses during crises. In times of crises, firms with relatively huge indebtedness tend to receive most of the shocks, whereas Sharia-compliant firms operate around a certain interest-bearing debt threshold of about 33%, in accordance with the screening method of Dow Jones Sharia, to mention one example. This filtering criterion mitigates financial integration between Islamic stocks and implied volatilities, making them less positively related and so harnessing diversification benefits. However, as found by Karim et al. [25], Islamic stocks are less exposed to implied volatility or fear indices than their conventional counterparts, due to the former's distinct screening features, which decouple them from the risks facing conventional markets. Conversely, Tissaoui and Azibi [24] and Shahzad et al. [26] documented that both Islamic and conventional stocks are similarly exposed to global risk factors and exhibit strong linkages with their conventional counterparts.
This leads to the rejection of the decoupling hypothesis for Islamic and conventional stocks. These inconsistencies render a further assessment of the susceptibility of most Islamic (Sharia-compliant) stocks to several relevant implied volatilities worthwhile, in order to enhance investors' understanding and confidence. It is known that implied volatilities drive interconnectedness among financial markets, including Islamic stocks, during stressful times [22]. What is not known is the susceptibility of Islamic stock returns to implied volatilities at market conditions of stress, normal, and boom, as revealed by quantile regression approaches. That is, prior studies on the susceptibility of Islamic equities to implied volatilities are mostly silent on the use of quantile regression approaches (see [20][21][22][23][24],[26]). However, the quantile regression approaches, namely quantile regression (QR) and quantile-on-quantile regression (QQR), offer the opportunity to capture the nonlinear, asymmetric, and nonstationary influence of changes in implied volatilities [13] on Islamic stock returns (see [27,28]), as well as the effect during bearish, normal, and bullish market situations. The traditional QR and ordinary least squares approaches alone do not display these properties as well as QQR does. Furthermore, the market condition of Shariah stocks may not be the same as that of volatility indices. Thus, Shariah stocks and volatility indices may witness different market conditions and, hence, analysing the asymmetric relationships between the two assets across their varied market conditions is important.

We contribute to the literature in several ways. First, sustainability initiatives require firms around the globe to have a pivotal mandate to disclose their performance on sustainability issues while putting up sustainable behaviour. However, since investors desire to form reliable portfolios, diversification benefits with other assets become their utmost flight to quality. Second, the asymmetric effect of implied market volatilities, which has gained significant interest from a nascent and fledgling body of academic literature, is utilised in tandem with the sustainability Islamic stocks. We select four relevant implied market volatilities to integrate shocks from the developed market, the emerging market, the US market, and the energy market. Most of these volatilities have been touted to be significant risk transmitters for several conventional assets [5,13,14,18], but the few studies on Islamic stocks utilise only specific or few implied volatility indices [20][21][22], giving a myopic view of the nexus. We do this to draw insights into effective portfolio reconstruction, redeployment, and rebalancing towards risk minimisation strategies. Third, to examine the asymmetric effect of the implied market volatilities on Islamic stocks across market conditions (stress, benign, and boom) [29], the quantile regression as well as quantile-on-quantile regression techniques are employed. Moreover, the robustness of these estimates hinges on the application of causality in mean at various quantiles. These are presented to clearly divulge the heterogeneous [30] behaviour of markets and their participants across market conditions of stress, normal, or boom [11,[31][32][33][34]]. We found an asymmetric influence of volatility on sustainable Islamic stock returns at various quantiles. Furthermore, most volatilities' asymmetric effects were negatively related to sustainable Islamic stock returns, implying diversification benefits across market conditions.
Moreover, with the exception of the extreme quantiles, there was a causal effect of implied market volatilities on Islamic stock returns at most quantiles. The rest of the paper is arranged as follows: in Section 2 we present the study's methodology, whereas Section 3 contains the results and discussion; Section 4 concludes the study with some implications, recommendations, and suggestions for further studies.

Materials and Methodology
The seven sustainability Islamic equity indices employed are relevant for withstanding shocks and supporting diversification with other financial assets. All seven indices are needed for the current study because they provide better information on most Islamic sustainability or Sharia-compliant equities across different regional blocs, allowing their susceptibility to shocks to be examined while encouraging regional policy decisions. Also, we used four implied volatilities to gauge investors' fear in the Islamic market. The implied volatilities are the CBOE Emerging Markets ETF Volatility (EMV), the Chicago Board Options Exchange Volatility Index (USVIX), the Dorsey Wright Developed Market Momentum and Low Volatility index (VDM), and the CBOE Energy Sector ETF Volatility (EnergyV). These four implied volatilities comprehensively give us the opportunity to investigate their heterogeneous as well as asymmetric impact on the seven selected Islamic stocks, for effective portfolio reconstruction, redeployment, and rebalancing towards risk minimisation strategies. The daily data span 11 January 2017 to 11 February 2022, yielding 1264 observations. The period was chosen based on the availability of consistent data at the start and end points; regardless, it includes significant economic events such as Brexit, a historic crude oil price crash, and the COVID-19 pandemic. The data on sustainability Islamic equities were obtained from the RobecoSAM database, and the volatility indices were obtained from investing.com. We utilised the natural logarithmic returns for each market index.

Quantile-on-Quantile Regression (QQR). The conditional quantile link between two or more variables is empirically justified using the QQR technique, which is a nonparametric variant of the traditional quantile regression (QR). The QQR is suited for studying bearish and/or bullish interrelations between the returns on Islamic stocks and volatility indices, since quantiles can express asymmetry between high and low logarithmic price patterns. The susceptibility of the Islamic stocks to the volatility indices is expressed non-parametrically in equation (1), where SR_t and VI_t, respectively, represent the returns of the Islamic stock and the volatility index at period t, β_θ(·) is the slope of the connection between the two assets at any conditional level, θ denotes the θth conditional quantile of SR_t in equation (1), and u^θ_t is the error term whose θth conditional quantile is zero. Equation (2) can now be substituted into equation (1) to arrive at equation (4), where (*) yields the θth conditional quantile of the returns on VI in equation (4).
It additionally portrays the true susceptibility of SR at the τth quantile to shocks from the θth quantile of VI, in respect of equation (4), through the parameters β_0 and β_1 indexed by θ and τ. Similar to the case of OLS, we apply an analogous minimisation to produce the estimator, in which the quantile loss function is ρ_θ(u) = u(θ − I(u < 0)), I(·) is the indicator function, K(·) denotes the kernel density function (KDF), and h is the bandwidth parameter of the KDF. The observations of VI are weighted by the KDF, with the weights determined by the distance between the empirical distribution function F_n(VI_t) = (1/n) Σ_{k=1}^{n} I(VI_k < VI_t) and the target quantile τ. Following the specifications of Sim and Zhou [35], the quantiles employed in this study for the QQ decomposition span the range 0.05 to 0.95. The smoothness of the estimated results is contingent on the bandwidth, which governs the weighting across quantiles. Smaller bandwidths are recommended over larger ones because larger bandwidths may lead to biased estimates of the coefficients. A schematic implementation of this kernel-weighted estimator is sketched after the preliminary analysis below.

Preliminary Analysis. The time series plots of both prices and returns for sustainable Islamic stocks (in black) and volatility indices (in red) are presented in Figure 1. Most of the Islamic stocks trend upwards prior to mid-2020, plunge in mid-2020, and skyrocket afterwards. It can be observed that the market rebounds after the onset of the COVID-19 pandemic, demonstrating market performance that supersedes that seen prior to the pandemic. Accordingly, we find prospects of extreme market rebounds after the onset of a shock within the sustainable Islamic stock markets. Conversely, except for the developed market volatility index, the remaining volatility indices are inversely related to the Islamic stock market, indicating a potential hotspot for portfolio diversification, hedging, or safe havens. Also, the plunge in prices during the COVID-19 pandemic shows up as shocks in the returns plots of the sustainability Islamic stocks. Generally, all the returns series exhibit volatility clustering.

We present Table 1 to examine the behaviour of the individual financial time series over the sampled period. It can be seen that all the variables have positive means, suggesting potential for increased market performance. Also, the data show small variations, a tendency towards more negative values than high ones, and leptokurtic distributions. We confirm from the Jarque-Bera statistics that the distributions of all the financial time series are non-normal. Additionally, we observe from the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test that all the returns series are stationary, with a failure to reject the null hypothesis of stationarity (p-value > 0.05). However, since the returns series are not normally distributed, this accentuates the relevance of employing an asymmetric statistical tool capable of revealing relationships across market situations. In assessing the linearity of the financial time series, the Teräsvirta neural network (TRS) test, with a null hypothesis of linearity, is employed. The TRS test suggests that the returns series are nonlinear (p-value < 0.05).
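To make the estimator concrete, the following sketch implements a kernel-weighted quantile regression in the spirit of the QQR described above. This is not the authors' code: the synthetic return series, the Gaussian kernel, and the bandwidth h = 0.05 are illustrative assumptions made for the example.

```python
# Illustrative sketch (not the authors' code) of quantile-on-quantile
# regression: a local-linear quantile regression of stock returns SR on
# volatility-index returns VI, kernel-weighted around the tau-th quantile
# of VI and evaluated at the theta-th quantile of SR.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, rankdata

rng = np.random.default_rng(1)
n = 1000
VI = rng.standard_normal(n)                    # volatility-index returns (synthetic)
SR = -0.3 * VI + 0.5 * rng.standard_normal(n)  # Islamic stock returns (synthetic)

def pinball(u, theta):
    """Quantile loss rho_theta(u) = u * (theta - I(u < 0))."""
    return u * (theta - (u < 0.0))

def qqr(theta, tau, h=0.05):
    """Estimate (beta0, beta1) at the theta-th quantile of SR, localised
    around the tau-th quantile of VI via Gaussian kernel weights."""
    F_vi = rankdata(VI) / n                    # empirical CDF of VI
    w = norm.pdf((F_vi - tau) / h)             # kernel weights
    vi0 = np.quantile(VI, tau)
    def loss(b):
        return np.sum(w * pinball(SR - b[0] - b[1] * (VI - vi0), theta))
    return minimize(loss, x0=np.zeros(2), method="Nelder-Mead").x

for theta in (0.1, 0.5, 0.9):
    b0, b1 = qqr(theta, tau=0.5)
    print(f"theta={theta:.1f}, tau=0.5: slope = {b1:+.3f}")
```

Setting the kernel bandwidth to zero weight outside a single quantile would reduce the estimator to an ordinary quantile regression; the smooth weighting is what lets the slope vary across quantiles of both variables.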
These diagnostics further address the need for employing the QQR technique, which is able to deal effectively with issues of asymmetry and nonlinearity relative to the traditional QR and OLS (see, e.g., [36] and references therein). Moreover, the unconditional correlation coefficients between pairs of time series are shown in Table 2. It is clear that the correlations among the financial time series are almost all significant at the 1% level. We notice a mixture of positive and negative correlations ranging from small to large magnitudes. The negative relationships between the variables have a high likelihood of supporting diversification, and these are found between the Islamic stock returns and most of the volatility indices. This implies that portfolio diversification among the sustainability Islamic stocks alone would do more harm than good to potential investors.

Tables 3-9 present the asymmetric effect of implied market volatilities on sustainable Islamic equities. We find a significant effect of implied market volatilities on Islamic stock returns across quantiles, at varying levels of significance. The susceptibilities of Islamic stock returns to market volatilities across market conditions (stress, benign, and boom) are mostly negative, except for volatilities from developed markets. Surprisingly, the effects of developed market volatilities on Islamic stocks have large magnitudes and are positive, except for S.PGS in Table 9. The similar asymmetric coefficients at most market conditions demonstrate the persistence of Islamic stocks with respect to external shocks; that is, it takes a while for Islamic stocks to respond to changes in external shocks.

Quantile Regression. Comparatively, all volatility indices but the developed market volatility demonstrate a reduction in magnitude from the lower quantiles to the upper quantiles, suggesting that negative shocks are more prominent in stressed market outcomes, whereas positive shocks are stronger in market booms, for all Islamic equities. It can therefore be concluded that most sustainable Islamic stocks are vulnerable to implied market volatilities. This is partly in line with the assertion made by Haddad et al. [23] that Islamic equities are susceptible to international shocks. The significant negative effect of implied volatility from the energy market concurs with the findings of Karim and Masih [20] and Lin and Su [21]. Conversely, Lin and Su [21] found that negative shocks between implied volatility from crude oil and Islamic stocks are more prominent at higher quantiles. Moreover, the outcomes generated by the current study do not deviate substantially from those generated by prior studies on conventional assets as well as commodities [5,13,14,18]. It is relevant that investors of Islamic stocks form well-diversified portfolios with market volatilities. Also, existing investors of sustainable Islamic stocks should hedge against fluctuations in Islamic stocks, having in mind the behaviour of market volatilities, or redistribute their existing Islamic stock portfolios.

QQR and QR Comparison. In this section, we investigate the relevance of a non-parametric asymmetric specification linking the sustainable Islamic stock returns and volatility returns. It also gives the opportunity to infer how significant the QQR estimates are, given knowledge of the QR estimates. Figure 2 presents the combined plots for both QQR and QR. A look at Figure 2 indicates that the QR and QQR estimates do not coincide exactly at all quantiles (see also [21,27,28]).
Nonetheless, to some extent, the line graphs confirm the QQR estimates, except at the extreme quantiles of most relationships. It is worth noting that, relative to the QR, the QQR projects a better view of the asymmetric linkages between the dependent and independent variables at varied quantiles of both variables. Hence, given that the majority of the QQR estimates are confirmed by their QR counterparts, we emphasise the relevance of the chosen methodological framework. We present the three-dimensional QQR estimates in the next subsection to further address the asymmetric and nonlinear dynamics of the employed financial time series.

Quantile-on-Quantile Regression. The three-dimensional asymmetric dependence nexus between Islamic stock returns and implied market volatilities is shown in Figure 3. It can be observed that lower (negative) values, relative to higher ones, are persistent for the emerging market volatility and the US VIX. This implies that, considering the quantile dependence structure of both Islamic stocks and implied market volatilities, it is better to diversify with the emerging market volatility and the US VIX. Accordingly, bearing in mind the quantile dependence structure of the possible combinations in this study, portfolio rebalancing or redeployment is pertinent with volatilities from the energy and developed markets.

Robustness. The causality in mean test, as proposed by Jeong et al. [37] and advanced by Balcilar et al. [38], is employed in this study to confirm whether sustainability Islamic stock returns are significantly driven by volatilities at varying levels of market conditions. Prior empirical research investigating the robustness of quantile regression has used this approach (see [13,39]). Figure 4 shows that, with the exception of the S&P Africa Frontier Shariah Index, the volatility indices have a strong causal impact on sustainable Islamic stock returns. From the lower-mid quantiles to the upper-mid quantiles, the causation grows stronger. This means that typical market outcomes influence causal estimates for our quantile regression model more than market stress and boom do.

Conclusion
We contribute to the literature on the asymmetric relationship between sustainability Islamic stock returns and volatility returns across market conditions. The quantile regression and quantile-on-quantile regression techniques were employed, and the causality in mean technique at various quantiles was further utilised to examine the robustness of our quantile estimates. Findings from the study revealed an asymmetric effect of volatilities on sustainability Islamic stock returns at various quantiles. In addition, the asymmetric effects of most volatilities were mostly inversely related to sustainability Islamic stock returns, suggesting diversification benefits at various market outcomes. Also, we document causality from volatilities to Islamic stocks at various quantiles, except for the extreme quantiles. It stands to reason that typical market outcomes influence causal estimates for our quantile regression model more than market stress or boom. For the individual volatility indices, the volatility index from developed markets transmits positive shocks to the sustainability equity indices, except for the S&P Global 1200 Shariah (S.PGS). Hence, a diversification benefit against shocks from the developed market volatility index would manifest only with the S.PGS index. On the other hand, the remaining volatility indices transmit negative shocks at various quantiles, indicating the need to diversify, hedge, or seek safe haven from them.
The significant asymmetric relationship between Islamic stock returns and implied market volatilities across quantiles demonstrates inefficient market dynamics, exacerbated by the irrational behaviour of investors, in line with the heterogeneous and adaptive market hypotheses. It is recommended that existing and potential investors of sustainable Islamic stocks be mindful of the heterogeneous susceptibilities of these stocks to market volatilities. It is important that they study the market at various market conditions, having in mind the potency of market volatilities. It is also necessary that optimal policy interventions from the sustainable Islamic regional blocs be deployed to shield vulnerable Islamic markets from external shocks. Further studies can assess the frequency-dependent asymmetric impact of market volatilities on sustainability stocks at various investment horizons and market outcomes [12,40,41].

Data Availability
The data used to support this study are available upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.
5,338.2
2022-07-19T00:00:00.000
[ "Economics", "Business" ]
A Simulation of Energy Recycling Concept in Automotive Application Using Hybrid Approach

This paper presents the development of a simulation to demonstrate a relatively new hybrid approach to improving energy resources that is applicable in the automotive industry. The existing hybrid approach in the automotive industry is considerably efficient in terms of energy saving, achieved by switching between fuel and electricity as energy resources. However, both energy resources confront various challenges: while the electricity resources require recharging, the fuel resources are scarce and expensive. Therefore, in this paper we propose a relatively new hybrid approach, referred to as the energy recycling concept, equipped with a coordination algorithm. To simulate the proposed energy recycling concept, a prototype of an Electrical Control Unit (ECU) car is built. Then, an algorithm that coordinates battery charging is developed and integrated with the ECU. Finally, the simulation of the proposed energy recycling concept equipped with the coordination algorithm is evaluated on the prototype of the ECU car. The results show that the proposed energy recycling concept, which allows switching between two sources of energy, is applicable to operating the ECU car prototype.

INTRODUCTION
Initially, conventional vehicles were designed to use gasoline or diesel to produce energy for the internal combustion engine. In the 20th century, vehicle manufacturers started to introduce various kinds of hybrid technology. Some of the popular hybrid vehicle manufacturers include Honda, Toyota, Nissan, Lexus, Mercedes, Hyundai, Ford, and Infiniti. Like conventional vehicles, hybrid vehicles can also be fuelled. Unlike conventional vehicles, however, hybrid vehicles have an electric motor and a battery. The main feature of hybrid vehicles is that they allow switching of the power source between fuel and electricity to operate the engine. Significantly, such hybrid vehicles offer various benefits to drivers, including fuel consumption efficiency, lower expenditure on fuel, and less risk of air pollution and health problems [1]. Indeed, there are studies on the contribution of hybrid vehicles to overcoming air contamination and global warming [2]. Besides, the dependence on non-renewable resources such as fossil fuels should not be viewed as a small issue [3]. Hybrid vehicles may be classified into Series Hybrid, Parallel Hybrid, Series-Parallel and Complex Hybrid [4]; the features of each type of hybrid vehicle can be found in [1]. The advantages of such hybrid vehicles can be summarized as follows: a) lessened risk of pollution [5]; b) cost savings on fuel consumption [6]; c) fulfilment of potential market demand [7]. Initiatives to improve electric vehicle performance can be found in [8], [9]. However, the existing hybrid technology is still prone to two major limitations: dependency on an electricity source for recharging, and fuel source availability. Thus, in this paper, we propose a relatively new hybrid approach, referred to as the energy recycling concept with a computer-based coordinating system (CCS), to overcome these limitations. The rest of the paper is organized as follows: Section 2 describes several components of the simulation development.
Section 3 presents the results of the evaluation on the simulated prototype. Finally, Section 4 concludes the simulation work and highlights a direction for future research.

RESEARCH METHOD
This section describes several components of the project development: the proposed energy recycling concept, the architecture of the computer-based coordinating system, the software implementation, and the hardware implementation. Figure 1 shows the proposed energy recycling concept using two batteries installed in the car prototype. The computer-based coordinating algorithm is developed to ensure that the batteries take turns being recharged. Significantly, the strength of the proposed recycling concept with the computer-based coordinating algorithm lies in its capability to maintain power supply to the batteries alternately; consequently, the continuous charging keeps the car prototype operating longer. Figure 2 shows the proposed architecture of the car prototype to simulate the energy recycling concept with the computer-based coordinating system (ERCS). Note that the ultimate goal of the proposed energy recycling concept is to avoid dependency on external resources such as petroleum, as well as electricity recharging difficulties. Eventually, reduced cost for end-users and efficient energy usage can be achieved. Figure 3 shows the main interface of the coordinating software applied in the ERCS. The connection menu contains a drop-down list of serial ports that communicate with the car prototype, and the Connect button makes the connection between the ERCS and the car prototype. The right panel shows the multiple voltage values and percentages in rows. Figure 4 shows the car prototype used in the simulation of the ERCS; the car prototype was built from a remote control car. Figure 5 shows the schematic design of the integrated components, illustrating the flow between the components used in the prototype. The dashed lines represent the signal pins between the components and the Arduino, and the arrowed lines present the flow of the voltage being used and to be charged. Note that we are not building a real car here; instead, in the simulation, we use a small-scale prototype to demonstrate the proposed concept. We replace the inverter of a hybrid car with a step-up voltage converter and the generator motor with a dynamo motor, and we use a potentiometer as the gas/electric pedal of the car. In addition, we added some other components such as an NPN transistor and two-channel relays. We use two batteries as the main power source to run the engine, which is akin to replacing the fuel source with batteries. Therefore, the prototype uses only an electric engine powered by either of the two batteries, one at a time. The relays act as power split devices, changing the power supply as coordinated by a switching algorithm.

The code for the coordinating algorithm in the ERCS prototype is written in the Arduino IDE and embedded into the microcontroller. The purpose of this coordinating algorithm is to enable switching of recharging to the battery with the appropriate voltage value. Equations 1 to 8 are used in building the algorithm to coordinate the switching procedure; the descriptions of the abbreviations used in the equations can be found in Table 1. Equation 2 shows the conversion of the analogue reading to the battery B voltage (Equation 1 is the corresponding step for battery A), Equation 3 shows the conversion of the current battery A voltage to a percentage value, and Equations 7 and 8 show the conversion of the battery B voltage to a PWM value. The minimum charge voltage for each battery is 5.0 volts. A schematic version of these conversions and of the switching rule is sketched below.
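Since the equations themselves did not survive extraction, the following sketch illustrates conversions and a switching rule of the kind described. This is not the authors' Arduino code: the 10-bit ADC scale, the 5 V reference/full-charge level, and the 8-bit PWM range are assumed standard Arduino conventions rather than values taken from the paper.

```python
# Illustrative sketch (not the authors' Arduino code) of the voltage
# conversions (cf. Equations 1-8) and the 15% battery-switching rule.
ADC_MAX = 1023           # 10-bit analogue-to-digital converter (assumed)
V_REF = 5.0              # assumed reference voltage and full-charge level
PWM_MAX = 255            # 8-bit PWM duty-cycle range (assumed)
SWITCH_THRESHOLD = 15.0  # percentage at which the primary battery is swapped

def adc_to_voltage(raw):
    """Eqs. (1)-(2) analogue: convert a raw ADC reading to a voltage."""
    return raw * V_REF / ADC_MAX

def voltage_to_percent(v):
    """Eq. (3) analogue: express a battery voltage as a percentage."""
    return 100.0 * v / V_REF

def voltage_to_pwm(v):
    """Eqs. (5)-(8) analogue: map the charge voltage onto a PWM value,
    capped to avoid overcharging."""
    return min(PWM_MAX, int(v * PWM_MAX / V_REF))

def choose_primary(raw_a, raw_b, current_primary):
    """The higher-voltage battery powers the motor; a swap is triggered
    once the primary battery falls to <= 15%."""
    pct = {"A": voltage_to_percent(adc_to_voltage(raw_a)),
           "B": voltage_to_percent(adc_to_voltage(raw_b))}
    if current_primary is None or pct[current_primary] <= SWITCH_THRESHOLD:
        return max(pct, key=pct.get)
    return current_primary

print(choose_primary(900, 600, None))  # -> "A" (battery A has the higher voltage)
```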
Equations 5 to 8 are used to convert each battery voltage to a Pulse Width Modulation (PWM) value. The purpose of these equations is to charge at the appropriate voltage and avoid overcharging. The NPN transistor recognizes the PWM value and presents it as the voltage to be charged into the respective battery. Figure 6 shows the conditions used in the proposed algorithm for the computer-based coordinating system (CCS). The algorithm triggers commands according to the voltage values: when the car is operating, the algorithm reads the voltages of both batteries and converts them to percentage values. The battery with the higher voltage is assigned to act as the primary power supply, providing power to the DC motor, while the battery with the lower voltage is charged. Whenever the voltage of the primary power supply gets low and its percentage reaches 15% or less, the algorithm switches the charging. The algorithm keeps coordinating as long as the car prototype is running; however, when the car is in the idle state, no switching occurs. The voltage values are sent to the ERCS for real-time monitoring and recorded in a table, from which the coordination of the auto-switching procedure in recharging the batteries can be observed.

RESULTS AND ANALYSIS
The prototype car model has been used to simulate the proposed energy recycling concept with a computer-based coordinating system. The car prototype is evaluated in two conditions: (1) monitoring battery voltage charging without the CCS algorithm, and (2) monitoring battery voltage charging with the CCS algorithm.

Monitoring battery voltage charging with CCS
The evaluation took about 46 minutes. The voltage is measured to observe the switching of the power supply, as well as the switching of recharging whenever the percentage of the battery in use is less than or equal to 15%. According to Table 2, initially the algorithm indicates that the percentage voltage of battery A is higher than that of battery B; thus battery A is used as the primary power supply and battery B as the secondary power supply, and battery B is charged. The switching of the power supply happened in minute 21: battery B became the primary power supply and battery A the secondary, so battery A was recharged while battery B supplied power to the DC motor. The DC motor runs at 5.0 volts with 1.0 ampere. In minute 33, the switching of the power supply and the switching of recharging occurred again, with battery A once more becoming the primary power supply and battery B the secondary. If the test were continued, the switching of the power supply and the switching of recharging of both batteries would continue to occur; thus the power source would never run out. In Table 2, the green colour indicates that the battery is used to supply power to the DC motor, the yellow colour indicates the charging mode of the battery, and the blue colour shows the simultaneous switching of the power supply and of the recharging battery.

Monitoring battery voltage charging without CCS
An observation of 72 minutes was also performed to monitor battery voltage charging without the CCS. In this case there is no input to indicate which battery has the higher or lower voltage; instead, the Arduino randomly chooses whichever battery acts as the power supply. According to Table 3 and Figure 8, the DC motor runs using battery B, and the switching of the power supply and the switching of recharging do not occur.
Figure 8 shows that battery B becomes drained until its voltage runs out, while battery A is not used until the end of the observation. At the end of this evaluation phase, the DC motor in the car prototype finally stopped because the power had run out. The red cells in Table 3 indicate the idle state of battery A, and the yellow cells indicate that battery B is used as the power supply. The evaluation result shows that switching of recharging between the two batteries does not occur if the CCS algorithm is not applied.

CONCLUSION
In this paper, a relatively new hybrid approach, known as the energy recycling concept, has been proposed. To demonstrate the proposed energy recycling concept, two major tasks were performed: a car prototype was built (hardware) and a computer-based coordinating algorithm was developed (software). Both components, the hardware and the software, were then integrated. The evaluation results show that the proposed energy recycling concept performs successfully: the implemented CCS algorithm is able to switch the primary power supply and the rechargeable battery. Note that in this paper, the proposed energy recycling concept has been evaluated in a simple simulation on a car prototype. Nevertheless, the principal idea of the proposed energy recycling concept appears applicable in the automotive industry, where it could reduce dependency on fuel availability and electricity resources. Notably, this project is supported by the Fundamental Research Grant Scheme, vot 1609, from the Ministry of Higher Education, Malaysia.
2,650.2
2018-10-01T00:00:00.000
[ "Engineering" ]
Enhanced observation time of magneto-optical traps using micro-machined non-evaporable getter pumps

We show that micro-machined non-evaporable getter pumps (NEGs) can extend the time over which laser cooled atoms can be produced in a magneto-optical trap (MOT), in the absence of other vacuum pumping mechanisms. In a first study, we incorporate a silicon-glass microfabricated ultra-high vacuum (UHV) cell with silicon-etched NEG cavities and aluminosilicate glass (ASG) windows and demonstrate the observation of a repeatedly-loading MOT over a 10 minute period with a single laser-activated NEG. In a second study, the capacity of passive pumping with laser-activated NEG materials is further investigated in a borosilicate glass-blown cuvette cell containing five NEG tablets. In this cell, the MOT remained visible for over 4 days without any external active pumping system. This MOT observation time exceeds the one obtained in the no-NEG scenario by almost five orders of magnitude. The cell scalability and potential vacuum longevity made possible with NEG materials may enable in the future the development of miniaturized cold-atom instruments.

Introduction
Laser cooling [1][2][3][4] has permitted groundbreaking advances in fundamental and applied physics by greatly reducing the velocity of atoms, giving access to the detection of narrow atomic resonances 5,6 and making possible the preparation of pure quantum states 7. The low-momentum ensembles available through laser cooling have led to the development of atomic devices and instruments with unrivaled precision and accuracy, including microwave 8 and optical [9][10][11][12][13] atomic clocks, quantum sensors 14, magnetometers 15 and inertial sensors based on matter-wave interferometry 16. The workhorse of cold-atom experiments is the magneto-optical trap (MOT) 17, in which a balanced optical radiation force cools atoms and a spatial localization is created by a magnetic field gradient. The MOT is typically created in an actively-pumped glass-blown cell in which a modest alkali density and an ultra-high vacuum (UHV) level are sustained. In recent years, significant efforts have been made to address the scalability of cold-atom instruments 18,19, even resulting in the commercialization of compact cold-atom clocks and sensors. Designs for chip-scale cold-atom systems have also been proposed 20 and demonstrated, including novel ways of redirecting laser beams to trap atoms such as the pyramid MOT 21,22 and grating MOT (GMOT) [23][24][25], as well as density regulators 26,27 and low-power coils 28. Progress has also been recently reported on the development of chip-scale ion pumps 29. However, the high voltages and large magnetic field in the presence of the atomic sample remain unfavourable for compact atomic clocks and precision instruments. Further miniaturisation of the vacuum cell is possible through the combination of passive pumping techniques and a suitable choice of vacuum materials. For example, micro-electro-mechanical-systems (MEMS) vapor cells, comprised of etched silicon frames and anodically bonded glass windows, provide a means to mass production and micro-fabrication of the vacuum apparatus. Such vapor cells [30][31][32][33][34][35] are now a mature technology, reliable and widely used in chip-scale atomic devices 36, including commercial products 37,38. Recently, such micro-fabricated cells have demonstrated compatibility with laser cooling through the formation of an actively-pumped MOT in a MEMS platform 39.
However, in the absence of active pumping, the vacuum in chip-scale cells is rapidly degraded by gas permeation through the glass substrates 40, material out-gassing, and residual impurities generated during the alkali generation and cell bonding processes 41. In 2012, Scherer et al. reported the characterization of alkali metal dispensers and NEG pumps in UHV systems for cold-atom sensors 42 and showed that a MOT could be sustained for several hours in a 500 cm³ volume pumped only with NEGs. In other studies, the activation of thin-film 43 or pill-type NEGs 44 was demonstrated to mitigate the concentration of impurities in hermetically sealed micro-machined vapor cells. In this paper, a 6-beam MOT, detected first in a MEMS cell with ASG windows and later in a glass-blown borosilicate cell, is used to study the benefit of laser-activated NEGs on the MOT observation time and vacuum pressure longevity with purely passive pumping. Key experimental parameters, including the number of atoms trapped in the MOT, the Rb vapor pressure and the non-Rb background pressure, are routinely monitored. The MOT observation time, defined as the time taken for the MOT to decay to the detection noise-floor level, was measured to increase by 2 orders of magnitude, up to 10 minutes, after activation of a single NEG in the MEMS cell. An additional test, performed in the conventional borosilicate cuvette cell with 5 similar NEGs, led to the observation of a MOT for more than 4 days in a regime of pure passive pumping. This MOT observation time is almost five orders of magnitude longer than in the no-NEG scenario. These results are encouraging for the development of UHV MEMS cells compatible with integrated and low-power cold-atom quantum sensors.

Methods
Figure 1(a) shows a simplified schematic of the experimental setup. At the center of the laser-cooling system is an actively-pumped micro-fabricated cell. The cell consists of a 40 mm × 20 mm × 4 mm silicon frame etched by deep-reactive ion-etching (DRIE) and sandwiched between two 40 mm × 20 mm × 0.7 mm anodically-bonded low-helium-permeation aluminosilicate glass wafers (ASG-SD2-Hoya 1) 40. A 6 mm circular hole is cut through one of the glass windows by laser ablation before anodic bonding to the silicon, allowing the MEMS cell to be connected to an external ion pump via a 7 cm long borosilicate tube. A photograph of the cell, prior to attaching the tube, is shown in Fig. 1(b). Cavities were etched into the walls of the Si frame to embed non-evaporable getters 2, as illustrated in Fig. 1(b). NEGs are inserted manually into the frame prior to anodic bonding and are held in place by thin 200 µm fingers to ensure mechanical stability. The active pumping vacuum system contains an electrically-heated alkali-metal dispenser that is used to provide the Rb vapor density. Rubidium atoms (85Rb) are cooled inside the cell using up to 20 mW of total laser power, red-detuned from the D2 cycling transition at 780 nm 45. The beam diameter is 8 mm and repumping from the F = 2 ground state is accomplished by frequency modulating the cooling laser at 2.92 GHz to create an optical sideband at the appropriate detuning. The fluorescence from the MOT is collected using an imaging system with a numerical aperture of 0.4 and imaged onto a CCD. We restrict the imaging to the region of interest to mitigate the thermal vapor contribution to the MOT counts. A second fluorescence imaging arm connected to a photodiode enables MOT loading time measurements.
NEGs are externally activated by heating with a 1 mm-diameter 975 nm laser beam. During activation of each NEG, the activation laser power was gradually increased until the short-term pumping of the individual NEG reached a maximum. A photodiode detects a small amount of light from the activation laser and is used to time-stamp the laser activation windows.

The measurement sequence is shown in Fig. 2. An image I_1 of the MOT is acquired over a given exposure time (5 or 10 ms) in the presence of both cooling light and magnetic field gradient. The field gradient is then switched off, a background image I_2 is taken, and a background-subtracted image I_3 = I_1 − I_2 is then generated. In this sequence, since the cooling laser is ON when the B-field is OFF, the number of counts in the image I_3 is in fact proportional to N_MOT − N_mol, the contribution of the room-temperature vapor cancelling in the subtraction, where N_MOT, N_mol and N_hot are the number of atoms actually trapped in the MOT, the number of atoms slowed down by the optical molasses, and the number of room-temperature atoms in the vapor, respectively. In our experimental conditions, we calculated using a simplified 1-D model 46 that N_MOT/N_mol ≈ 2. Taking this factor of 2 into account, the MOT atom number was estimated from the number of counts contained in the image I_3 using the formula reported in Ref. 47.

The residual background pressure in the cell was routinely extracted from measurements of the time constant of the MOT atom number loading curve, using the photodiode fluorescence channel (PD4 in Fig. 1). As shown in Fig. 2, the MOT loading curve is acquired each time the B-field is turned ON (to turn on the MOT). The data points of the MOT atom number loading curve are then approximated by an exponential function of time constant τ_MOT, used to calculate the background pressure 48,49. All background pressure data points shown in the figures of this manuscript were obtained using this approach; a schematic version of this extraction is sketched below. We mention also that measurements of the background pressure through MOT loading curves were confirmed by measurements of the background pressure extracted from the ion pump current (in situations where the ion pump was activated). The alkali density probe at 795 nm is aligned through the cell, with the transmission actively monitored on a photodiode (PD3). The density probe is scanned over a GHz range to resolve the absorption spectrum of the cell vapor. A lock-in amplifier is used to aid density extraction due to the small absorption path length in the MEMS cell.

Results
Prior to NEG activation, a MOT is initially established in the MEMS cell with an electrically driven alkali dispenser. During NEG activation, the background pressure and Rb density increase slightly. Once the atom number again reaches a steady state, the ion pump is suddenly turned off and the evolution of the MOT atom number and background pressure are measured, while the Rb density is observed to be constant in the cell. Corresponding results are shown in Fig. 3(a) and (b). In this configuration, the MOT atom number N_MOT decays rapidly to the detection noise floor, measured here to be about 8 × 10^4 atoms, within 10 s. The experimental data for the MOT atom number are fitted by a single exponential decay function N_MOT(t) = A exp(−t/τ_N) + c, with a time constant τ_N = 1.9 ± 0.2 s. Simultaneously, the background pressure increases exponentially with time t, following the expected law P(t) = P_f − ΔP exp(−t/τ_P), where P_f is the final pressure, ΔP = P_f − P_i, P_i is the initial pressure, and τ_P = 4.2 ± 3.5 s is the time constant.
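As an illustration of the loading-curve extraction described in the Methods and of the exponential fits above, the procedure can be sketched as follows. This is not the authors' analysis code: the synthetic data and the proportionality constant K are assumptions, and Refs. 48 and 49 of the paper give the detailed relation between loading time and pressure.

```python
# Sketch (not the authors' analysis code) of fitting the MOT loading curve
# to extract tau_MOT and converting it to a background-pressure estimate.
import numpy as np
from scipy.optimize import curve_fit

def loading_curve(t, n_ss, tau):
    """MOT loading model N(t) = N_ss * (1 - exp(-t / tau))."""
    return n_ss * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)        # seconds; synthetic PD4-style record
n_meas = loading_curve(t, 1.0e6, 1.9) * (1.0 + 0.02 * rng.standard_normal(t.size))

(n_ss, tau_mot), _ = curve_fit(loading_curve, t, n_meas, p0=(5e5, 1.0))

K = 2.0e-8  # torr*s; assumed order-of-magnitude constant (species dependent)
print(f"tau_MOT = {tau_mot:.2f} s  ->  background pressure ~ {K / tau_mot:.1e} torr")
```

The key point carried by the sketch is the inverse relation: shorter loading times indicate higher background-gas collision rates, and hence higher pressure.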
Following this first test, the NEG is activated and the above-described experiment is repeated, 10 minutes after the end of the activation window, with the results shown in Fig. 3(c) and (d). In this test, the MOT number decays significantly more slowly, remaining visible for times exceeding 10 minutes. Thus, with this single NEG activation, an improvement of about a factor of 100 in the MOT observation time was obtained. In this test, contrary to the test performed before NEG activation, we found that the MOT atom number decay could not be fitted by a single-time-constant exponential decay function, likely due to the simultaneously evolving background pressure and alkali density within the cell during the initiation of passive pumping. In the present case, as shown in Fig. 3(c), the MOT atom number decay is found to be well fitted over 450 s by a dual-exponential function N_MOT(t) = A_1 exp(−t/τ_N1) + A_2 exp(−t/τ_N2) + c, with time constants τ_N1 = 11 ± 1 s and τ_N2 = 109 ± 2 s dominating before and after the first 10 s, respectively. This approximation is reported as a phenomenological model; further studies are required to better understand and model the MOT atom number dynamics. The background pressure data reported in Fig. 3(d) are again correctly fitted by the expected pressure-rise law, with a time constant τ_P = 74 ± 9 s. This increased time constant of the vacuum pressure is directly related to the activation of the NEG pump.

Following the initial demonstration in the MEMS cell, we performed a similar experiment in a standard borosilicate glass-blown cell with a length of 10 cm and a cross-sectional area of 1 cm², containing 5 NEGs. The use of a glass-blown cell here permits further NEG characterization without requiring the fabrication of a MEMS cell with additional NEG cavities. The NEGs were sequentially activated while the steady-state MOT atom number and background pressure, as established from the MOT loading curves, were tracked in the absence of active pumping. In these measurements, the ion pump was turned off 10 minutes after NEG activation and was turned back on again after each new NEG measurement, to let the system reach a new steady state. Figure 4 shows an example of the MOT atom number decay (a) and background pressure evolution (b) following the activation of the fourth NEG and subsequent extinction of the ion pump. In Fig. 4(a), we found that the decay of N_MOT was reasonably fitted by a dual-exponential function, as described above, with τ_N1 = 10 ± 0.1 s and τ_N2 = 70 ± 1 s dominating before and after the first 40 s, respectively. The background pressure data (b) are fitted by an exponential function, here with a time constant τ_P = 73 ± 12 s. Figure 4(c) reports the measured value of the time constant τ_P versus the number of activated NEGs in the glass-blown cell. We note that the performance of a single activated NEG is less efficient than the single-NEG performance in the MEMS cell. This is likely due to the significantly reduced vacuum volume of the MEMS cell compared to the glass-blown cuvette. It is also observed that the pressure time constant τ_P increases with each additional NEG activation, showing a summing contribution to the passive pumping within the cell environment. Following the activation of five NEGs, the time constant τ_P is found to be improved by a factor of 30 in comparison to the initial test (before any NEG activation).
After each passive pumping period with the NEGs, the ion pump was turned back on and the MOT was recovered. We found that with each subsequent NEG activation, the number of atoms in the MOT after ion pump turn-on also increased, as shown in Fig. 4(d). This increase is roughly linear with the number of activated NEGs, showing that the NEGs contribute to the pumping dynamics in the cell in the presence of the ion pump. We noted also that the Rb pressure in the cell increased slightly after each subsequently activated NEG. This could be explained by Rb adsorption onto the NEGs prior to activation.

Following the evaluation of the short-term impact of passive pumping on the evolution of the cell environment, the mid-to-long-term evolution of the cell in the regime of purely passive pumping was investigated. Figure 5 shows the long-term evolution of the MOT atom number (a), the background pressure (b) and the 85Rb pressure (c) in the borosilicate cell with 5 activated NEGs, after the ion pump is turned off (at t = 0). The MOT atom number, initially at the level of about 10^6, is observed to decrease until 1000 s to a value of a few 10^4. Following this initial decay, the atom number increases again, flattening around 10^4 s before decaying to 2 × 10^3 at 2 × 10^5 s. The resurgence of the MOT number is likely due to a simultaneous decrease of the background pressure and slight increase of the Rb density at 10^4 s. The Rb pressure increase can be explained by the fact that the electric dispenser was operating at a fixed current throughout this sequence, leading to a slow increase in alkali vapor pressure due to the lack of active pumping that would otherwise remove Rb from the vacuum. The gradual increase reaches a maximum at 10^5 s, after which the Rb pressure decreases again until the MOT drops below the detection noise floor. The reason for the background pressure fluctuation between 10^3 and 10^4 s was further investigated. We found that the short-term τ_P was increased by a factor of 10 when a valve was used to remove the ion pump rather than turning it off. This indicates that turning off the ion pump may release contaminants into the vacuum; removing them could take the NEGs the time scale seen in Fig. 5(b), resulting in the observed background pressure fluctuation. After the resurgence of the MOT atom number near 10^4 s, the MOT number decay is fitted by an exponential decay function, shown in Fig. 5(d), with a time constant of 5.2 × 10^4 s. The MOT was still clearly visible after 3.5 × 10^5 s, i.e. more than 4 days. Using expressions reported in Ref. 40, we calculated that He permeation through the borosilicate glass may contribute to the background gas increase at this stage of the experiment. We checked that actual variations of the Rb density, cell temperature, magnetic field gradient, total laser intensity or laser detuning, measured during the test, could not explain the MOT atom number dynamics on long integration times. Possible variations of the MOT beam alignment or of the MOT beams' power distribution (not measured in the experiment) could have contributed to slow variations of the MOT number seen at long observation times 46. The absence of points between 6 × 10^4 and 10^5 s in Fig. 5(b) is due to a software issue with the MOT loading time extraction. We note in Fig. 5(a) that at long time scales the MOT exhibits a diffuse shape, due to the high background pressure and alkali density regime. In addition, the MOT height likely changes a little over the 4-day measurement.
We checked that actual variations of the Rb density, cell temperature, magnetic field gradient, total laser intensity or laser detuning, measured during the test, could not explain the MOT atom number dynamics at long integration times. Possible variations of the MOT beam alignment or of the power distribution among the MOT beams (not measured in the experiment) could have contributed to slow variations of the MOT number seen at long observation times (Ref. 46); these might result from slight mechanical, polarization or optical alignment changes. (Figure 5 caption: The absence of points between 6 × 10^4 and 10^5 s in (b) is due to a software issue with the MOT loading-time extraction. Note in (a) that at long time scales the MOT exhibits a diffuse shape, due to the high background pressure and alkali density regime, and that the MOT height likely changes slightly over the 4-day measurement.) Although further work is required to demonstrate longer passive pumping times, this proof-of-principle measurement with the activation of five NEGs has demonstrated a MOT observation time more than 5 orders of magnitude longer than in the no-NEG scenario. In a last test, to demonstrate that a degradation of the NEG pumping rate was not a systematic limitation, the ion pump was re-activated to recover the MOT, before being shut down again to evaluate the continued pumping performance of the NEGs. In this scenario, we found that 8 days after the NEG activation, the values of the time constant tau_P did not show any clear sign of degradation of the short-term pumping rate. This result is an additional source of encouragement for the future development of passively pumped cold-atom MEMS cells. Conclusions We have reported the detection of a 6-beam magneto-optical trap in a MEMS cell and in a glass-blown cell, each embedding laser-activated passive non-evaporable getter (NEG) pumps. In each cell, the evolution of the cell's inner atmosphere was monitored throughout the NEG activation windows after a steady-state MOT had been achieved, and passive pumping tests were later performed by turning off the external active ion pump. In the MEMS cell using ASG windows, a single NEG was successfully laser-activated, demonstrating a 2-orders-of-magnitude improvement of the MOT observation time, to 10 minutes. In the glass-blown borosilicate cuvette cell, activation of 5 NEGs yielded a MOT observation time greater than 4 days in the regime of purely passive pumping, i.e. about five orders of magnitude longer than in the no-NEG scenario. These results open the way to the development of UHV MEMS cells intended for use in fully miniaturized cold-atom sensors and instruments.
Non-local matrix elements in $B_{(s)}\to \{K^{(*)},\phi\}\ell^+\ell^-$ We revisit the theoretical predictions and the parametrization of non-local matrix elements in rare $\bar{B}_{(s)}\to \lbrace \bar{K}^{(*)},\phi\rbrace\ell^+\ell^-$ and $\bar{B}_{(s)}\to \lbrace \bar{K}^{*}, \phi\rbrace \gamma$ decays. We improve upon the current state of these matrix elements in two ways. First, we recalculate the hadronic matrix elements needed at subleading power in the light-cone OPE using $B$-meson light-cone sum rules. Our analytical results supersede those in the literature. We discuss the origin of our improvements and provide numerical results for the processes under consideration. Second, we derive the first dispersive bound on the non-local matrix elements. It provides a parametric handle on the truncation error in extrapolations of the matrix elements to large timelike momentum transfer using the $z$ expansion. We illustrate the power of the dispersive bound by means of a simple phenomenological application. As a side result of our work, we also provide numerical results for the $B_s \to \phi$ form factors from $B$-meson light-cone sum rules. To this end, several intrinsically non-perturbative hadronic matrix elements must be calculated reliably. The main contributions to these amplitudes come from the semileptonic and electromagnetic dipole operators, and they are proportional to hadronic matrix elements of local quark currents. These matrix elements, which can be expressed in terms of "local" form factors, are very similar to the ones appearing in semileptonic decays mediated by charged currents. Beyond these terms, there are also contributions from four-quark and chromomagnetic dipole operators, which require the calculation of hadronic matrix elements of the T-product of these local operators and the electromagnetic current [18-21]. These matrix elements can be expressed in terms of "non-local" form factors and are considerably more complicated to compute than the local form factors. The decay amplitudes for any of these processes can be written as in Eq. (1.1), where $q^2$ is the invariant squared mass of the lepton pair and $L^\mu_{V(A)} \equiv \bar{u}(q_1)\gamma^\mu(\gamma_5)v(q_2)$ are leptonic currents. We have included explicitly the effects of the semileptonic operators $O_7$, $O_9$ and $O_{10}$ but suppressed contributions from other local semileptonic and dipole operators that are not relevant in the SM, as well as from higher-order QED corrections. Nevertheless, the decomposition in Eq. (1.1) is exact in QCD. All non-perturbative effects are contained within the local and non-local hadronic matrix elements $F^{B\to M}_{(T),\mu}$ and $H^{B\to M}_\mu$, the latter defined in Eq. (1.4) as a time-ordered product with the electromagnetic current $j^{\rm em}_\mu = \sum_q Q_q \bar{q}\gamma_\mu q$, with $q = \{u, d, s, c, b\}$. In Eq. (1.4) we retained only the terms containing the operators $O_1 = (\bar{s}\gamma_\mu P_L T^a c)(\bar{c}\gamma^\mu P_L T^a b)$ and $O_2 = (\bar{s}\gamma_\mu P_L c)(\bar{c}\gamma^\mu P_L b)$ (1.5), which have large Wilson coefficients in the SM. The contribution of these terms is commonly called the "charm-loop effect". The contributions of all the other WET operators are suppressed by small Wilson coefficients and/or by subleading CKM matrix elements. In the literature, the non-local contributions to Eq. (1.4) are sometimes included through a shift of the Wilson coefficients $C_7$ and $C_9$. The resulting effective Wilson coefficients $C^{\rm eff}_{7,9}(q^2)$ become both process- and $q^2$-dependent [20]. In this work we prefer not to use $C^{\rm eff}_{7,9}(q^2)$, to keep the non-local contributions explicitly separated from the local ones.
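For orientation, the $q^2$-dependence of such effective-coefficient shifts is driven at leading order by a one-loop quark-loop function. The sketch below implements the form widely used in the literature (e.g. by Beneke, Feldmann and Seidel); the normalization and scheme conventions of the present paper may differ, so this is an illustrative stand-in rather than the paper's own definition.

```python
import numpy as np

def h(s, mq, mu):
    """One-loop quark-loop function h(q^2, m_q) in a commonly used convention.
    s = q^2 >= 0 and mu is the renormalization scale, all in GeV units."""
    if s == 0.0:
        return -4.0 / 9.0 * (np.log(mq**2 / mu**2) + 1.0)  # q^2 -> 0 limit
    z = 4.0 * mq**2 / s
    if z > 1.0:   # below the quark-pair threshold: the function is real
        a = np.arctan(1.0 / np.sqrt(z - 1.0))
    else:         # above threshold: an imaginary (absorptive) part develops
        a = np.log((1.0 + np.sqrt(1.0 - z)) / np.sqrt(z)) - 0.5j * np.pi
    return (-4.0 / 9.0) * (np.log(mq**2 / mu**2) - 2.0 / 3.0 - z) \
        - (4.0 / 9.0) * (2.0 + z) * np.sqrt(abs(z - 1.0)) * a

# Charm loop below and above the open-charm threshold (illustrative inputs)
print(h(1.0, 1.27, 4.2), h(8.0, 1.27, 4.2))
```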
The local and non-local form factors $F^{B\to M}_{\lambda,(T)}(q^2)$ and $H^{B\to M}_\lambda(q^2)$ are the invariant functions of a Lorentz decomposition of the matrix elements. The structures $S_\lambda$ define our conventions for the form factors and are given in Appendix A. The non-local contributions $H^{B\to M}_\mu$ are currently the main source of theoretical uncertainty in the predictions of $\bar{B}\to M\ell^+\ell^-$ observables. All theoretical calculations of $H^{B\to M}_\mu$ rely on some form of Operator Product Expansion (OPE), which allows one to expand the non-local T-product in terms of simpler operators. Depending on the $q^2$ value, two OPEs are relevant. The local OPE is carried out in terms of local operators of the form $\bar{s}(0) D_\alpha \cdots D_\omega b(0)$ [23,24]. The matching conditions are known to next-to-leading order in QCD [25-30] and the corresponding matrix elements are related to the local form factors. The calculation of the non-local contributions in the region below the open-charm threshold, that is for $q^2 \lesssim 14\,{\rm GeV}^2$, then proceeds in three steps: 1. Calculation of the LCOPE matching coefficients up to the desired order (both in the QCD coupling and in the LCOPE power counting). This calculation must be carried out for $q^2$ values that ensure rapid convergence of the LCOPE, which is the case for $q^2 \lesssim -1\,{\rm GeV}^2$. 2. Calculation of the hadronic matrix elements of the operators that emerge in the LCOPE using non-perturbative methods. 3. Analytic continuation of the LCOPE results from the point of calculation to $q^2$ values that correspond to semileptonic decays, i.e. for $4m_\ell^2 \leq q^2 \lesssim 4M_D^2$. The matching coefficients of the leading (local) operators of the LCOPE are the same as the ones in the local OPE. The next (subleading) order in the LCOPE involves light-cone operators with the insertion of a single soft gluon field. These contributions have previously been considered in Refs. [11,31], where the matching has been computed at leading order in $\alpha_s$. The matrix elements of the leading operators are related to the local form factors, which have been computed using both lattice QCD and light-cone sum rules (LCSRs) with uncertainties of 10% or less [22,32-36]. The matrix element of the first subleading operator with a soft gluon field has been calculated in the framework of LCSRs with $B$-meson light-cone distribution amplitudes (B-LCDAs) [11]. More formal approaches to parametrizing the non-local matrix elements involve dispersion relations [11,12] or analyticity [16]. Both approaches have the advantage of providing parametrizations that are consistent with QCD, and their parameters can be determined from both theory and data [16,42,43]. However, the rate of convergence of these analytic expansions is not well understood, which makes it difficult to assign a truncation error to the approach. The purpose of this paper is to provide improvements on each of the three steps described above. In Section 2.1, we review the calculation of the matching coefficients of the subleading operator in the LCOPE, giving a cleaner representation of the result. In Section 2.2, we recalculate LCSRs for the matrix elements of the non-local subleading operator, employing for the first time a complete set of B-LCDAs and updating the values of several crucial inputs. When putting together these results, we find that the subleading contributions are two orders of magnitude smaller than in the previous calculation [11]. Our results significantly reduce the uncertainties on the non-local contributions. In Section 3, we improve on the parametrization of Ref. [16]
and derive for the first time the dispersive bound on the non-local form factors $H^{B\to M}_\lambda(q^2)$. This bound allows us to constrain the possible effect of truncated terms in the analytic expansion. Our conclusions follow in Section 4. A series of appendices contains details on the definition of the hadronic matrix elements in Appendix A, the calculation of the matching coefficients to subleading order in the LCOPE in Appendix B, the outer functions needed for the dispersive bound in Appendix C, and our results for the $B_s\to\phi$ form factors in Appendix D. Subleading Contributions in the LCOPE We perform a LCOPE to calculate the time-ordered product in Eq. (1.4). This is achieved by expanding the charm-quark propagators near the light-cone, i.e. for $x^2 \simeq 0$. The first two terms in this expansion are given in Eq. (2.1) [44]. The leading power in the LCOPE, which coincides with the leading power of the local OPE, involves the matching coefficients $\Delta C_7$ and $\Delta C_9$, which have been computed to next-to-leading order in QCD [25-30]. At leading order, one finds expressions in terms of $y(q^2) = 4m_c^2/q^2 > 1$. The next-to-leading power term in the LCOPE was discussed for the first time in Ref. [11]. In Section 2.1, we review the calculation of its matching coefficient at leading order in $\alpha_s$. In Section 2.2, we recalculate the corresponding non-local matrix elements using LCSRs with B-LCDAs. Matching Condition for the Subleading Operator To obtain the next-to-leading power term in the LCOPE, one has to expand one of the two charm propagators at next-to-leading power in $x^2$. This expansion introduces a gluon field $G_{\sigma\tau}(ux)$, as one can see from the second line of Eq. (2.1). Following Ref. [11], we use the translation operator to rewrite $G_{\sigma\tau}(ux)$ in terms of fields at the origin. To make the calculation simpler it is convenient to introduce the light-like vectors $n^\mu_\pm$, which satisfy $n_\pm^2 = 0$, $n_+ \cdot n_- = 2$ and $v^\mu = (n_+^\mu + n_-^\mu)/2$, where $v^\mu$ is the four-velocity of the $B$ meson. As anticipated in the previous section, to ensure the convergence of the LCOPE one has to impose that $4m_c^2 - q^2 \gg \Lambda m_b$. Hence, we fix $q^2$ deep in this region. The quantities $\Delta C_7$ and $\Delta C_9$ are the same as in Ref. [30]. Now, we can decompose the covariant derivative in Eq. (2.5) in light-cone coordinates, $D^\mu = (n_+\cdot D)\,n_-^\mu/2 + (n_-\cdot D)\,n_+^\mu/2 + D_\perp^\mu$. As shown in Ref. [11], the contributions arising from the $(n_-\cdot D)$ term are further power-suppressed and hence can be neglected; see also Refs. [45,46]. We reproduce the form of the subleading operator as in Eq. (3.14) of Ref. [11]. This allows us to express the next-to-leading power in the LCOPE as in Eq. (2.10). Here $\alpha_s \equiv \alpha_s(\mu_c)$, with $\mu_c$ a perturbative scale, and $\omega_2$ is the $n_-$ light-cone component of the gluon momentum, which is related to the variable $\omega$ of Ref. [11] through $\omega = \omega_2/2$. We also abbreviate the relevant combination of Wilson coefficients as $C^{\rm KMPW}$. The matching coefficient $\tilde{I}_{\mu\rho\sigma\tau}$ is a four-tensor that depends on the gluon momentum $\omega_2 n_-$ and the momentum transfer $q$. We reproduce the expression for $I_{\mu\rho\alpha\beta} \equiv -\tfrac{1}{2}\,\varepsilon_{\alpha\beta}^{\;\;\;\sigma\tau}\,\tilde{I}_{\mu\rho\sigma\tau}$ in Eq. (3.15) of Ref. [11] as well. Nevertheless, a few comments on $\tilde{I}_{\mu\rho\sigma\tau}$ are in order: • To leading order in $\alpha_s$, Eq. (2.10) only receives contributions from the charm electromagnetic current. Thus, we factored out the charm electric charge on the r.h.s. of Eq. (2.10), rendering the definition of $\tilde{I}_{\mu\rho\sigma\tau}$ consistent with Ref. [11]. • To leading order in $\alpha_s$, the matching coefficient $\tilde{I}_{\mu\rho\sigma\tau}$ is finite in the limit $\varepsilon \to 0$. Hence, it cannot depend explicitly on the renormalization scale $\mu$ to this order. Any residual $\mu$ dependence emerges only from the use of scale-dependent quantities; here, from the use of the charm quark mass $m_c$.
This behaviour is reflected in the fact that the matching coefficient of the LCOPE term at leading power already compensates the running of the semileptonic and electromagnetic dipole coefficients $C_9$ and $C_7$ to order $\alpha_s$. • The form in which $I_{\mu\rho\alpha\beta}$ is presented in Ref. [11] is explicitly dependent on $\mu$, in apparent contradiction to the previous comment. However, after integrating over $u$, this explicit scale dependence is removed. We therefore prefer to present the matching coefficient in such a way that the explicit scale dependence does not appear in the first place. Details on the manipulations to achieve this are presented in Appendix B. Thus, we write the subleading matching coefficient in the form of Eq. (2.11). Here, we approximate the square of the momentum $\bar{q}^\mu = q^\mu - v^\mu u\omega_2/2$ as $\bar{q}^2 \simeq q^2 - u\omega_2 m_b$ and adopt the convention $\varepsilon^{0123} = +1$. Even though Eq. (2.11) has a slightly different form with respect to the matching coefficient presented in Ref. [11], we emphasize that the two results are analytically equivalent. The form of Eq. (2.11) has also been used to calculate the numerical results of Ref. [11] in a non-public Mathematica code (we are very grateful to Yu-Ming Wang for sharing this code with us). Therefore, we fully confirm the analytic results of Ref. [11] for the tensor-valued matching coefficient due to soft-gluon interactions at subleading power in the LCOPE. Calculation of the Non-Local Hadronic Matrix Elements We proceed to calculate the hadronic matrix elements of the operator using LCSRs with B-LCDAs. To derive the sum rule, we start by defining the correlator of Eq. (2.13), where $j^\Gamma_\nu \equiv \bar{q}_1\Gamma_\nu s$ is the interpolating current, with $\bar{q}_1 \in \{\bar{u},\bar{d},\bar{s}\}$ depending on the decay channel. The Dirac structure $\Gamma_\nu$ of the interpolating current is chosen such that the respective $M$-to-vacuum matrix element does not vanish; we use the choices listed in Eq. (2.14). The light-cone sum rule calculation of the matrix elements of $\tilde{O}_\mu(q)$ is based on a LCOPE of the correlator Eq. (2.13) in the framework of heavy quark effective theory. Here, the momentum $q$ is fixed according to the considerations in the previous section. For a fixed $q$, we now need to ensure that $k^2$ is chosen appropriately, i.e. that the expansion of Eq. (2.13) is dominated by bi-local operators with light-cone dominance. We find that for $-k^2 \sim$ a few $\Lambda^2$ the correlator is dominated by contributions at light-like distances $y^\mu \simeq (y\cdot n_-)\,n_+^\mu/2$ such that $y\cdot q \simeq (y\cdot n_-)\,4m_c^2/m_b$ [11]. Therefore, we can expand the strange-quark propagator near the light-cone as well, keeping only the leading power term. We obtain Eq. (2.15), where $a, b$ are spinor indices. The non-local $B$-to-vacuum matrix elements can be expressed in terms of B-LCDAs. While in the sum rules for local form factors the leading contribution involves only two-particle B-LCDAs, the leading contribution in Eq. (2.15) comes from the three-particle B-LCDAs. The three-particle distribution amplitudes of the $B$ meson can be defined as in Refs. [47,48], where we suppress the arguments of the B-LCDAs, e.g. $\psi_A \equiv \psi_A(\omega_1, \omega_2)$, for brevity. The derivatives, which act only on the hard-scattering kernel, are abbreviated as $\partial_\mu \equiv \partial/\partial l^\mu$, with $l^\mu = (\omega_1 + u\omega_2)v^\mu$. In addition, we introduced a shorthand notation in which $\psi_{3p}$ represents any of the three-particle B-LCDAs appearing in Eq. (2.16). In Ref. [11] only the B-LCDAs $\psi_A$, $\psi_V$, $X_A$, and $Y_A$ have been taken into account.
The distribution amplitudes of Eq. (2.16) have no definite twist, where the twist is defined as the difference between the dimension and the spin of the corresponding operator. It is important to express the B-LCDAs in Eq. (2.16) in terms of B-LCDAs with definite twist, to ensure a consistent power counting. The relations between the twist basis and the basis of Eq. (2.16) are given in Ref. [48]. Inverting these relations, one obtains (using the same nomenclature as in Ref. [35]) expressions in which the subscripts 3, 4, 5, 6 indicate the twist of the respective B-LCDA. We include in our calculation contributions up to twist four, with the models given in Section 5.1 of Ref. [48]. The models for B-LCDAs of twist five or higher are not currently known. However, they "are not expected to contribute to the leading power corrections $O(1/M_B)$ in $B$ decays" [48]. To proceed with the extraction of our sum rule, we need to obtain the hadronic dispersion relation of the correlator (2.13). This can be done by inserting a complete set of states with the appropriate flavour quantum numbers in between the interpolating current $j^\Gamma_\nu$ and the operator $\tilde{O}_\mu$, as in Eq. (2.18), where the sum runs over all possible polarizations. The function $\rho^\Gamma_{\mu\nu}(s, q^2)$ is the spectral density, which encodes the information about the continuum as well as the excited states, while $s_h$ denotes the continuum-states threshold. The matrix elements of the interpolating current are expressed in terms of the decay constants $f_P$ for $P = K$ and $f_V$ for $V = K^*, \phi$. The matrix elements of the operator are decomposed in terms of the scalar-valued functions $\tilde{V}^{B\to M}_\lambda$, where the definitions of the Lorentz structures $S_\lambda$ are given in Appendix A. The expressions for the non-local form factors $H^{B\to M}_\lambda$ then read as in Eq. (2.24), where the ellipsis stands for higher powers in the LCOPE and spectator-scattering interactions. The definitions of the form factors $F^{B\to M}_\lambda$ are given in Appendix A as well. The last step in the calculation of the sum rule is to match the LCOPE result of Eq. (2.15) onto the hadronic representation. To get rid of the contributions of the excited and continuum states in Eq. (2.18), we exploit the semi-global quark-hadron duality approximation. We also perform a Borel transform to reduce the impact of potential quark-hadron duality violations. In order to isolate the individual contributions of the functions $\tilde{V}^{B\to M}_\lambda$, we select a suitable Lorentz structure $P^{\tilde{V}}_{\mu\nu}$. Hence, we decompose the two-point functions in terms of scalar-valued functions $\Pi^{\tilde{V}}(k^2, q^2)$. The LCOPE results for the functions $\Pi^{\tilde{V}}$ can always be written in a dispersive form, in which we introduce the new variable $\sigma = \omega_1/M_B$. For ease of comparison, we give our results in the basis $\tilde{A}, \tilde{V}_1, \tilde{V}_2, \tilde{V}_3$ as in Ref. [11] instead; the relations between the two bases involve the Källén function $\lambda$. For convenience, we have also introduced the function $\tilde{V}_{23}$. We can now write down the sum rule for any of the quantities $\tilde{V} = \tilde{A}, \tilde{V}_1, \tilde{V}_2, \tilde{V}_{23}$, which reads as in Eq. (2.30), with $I^{\tilde{V}}_n \equiv J^{\tilde{V}}_n/N^{\tilde{V}}$. We abbreviate $\sigma_0 \equiv \sigma(s_0, q^2)$ and $s'(\sigma, q^2) \equiv ds(\sigma, q^2)/d\sigma$, and use a differential operator in $d/d\sigma$. Here $s_0$ is the effective threshold of the sum rule, which in general differs from the continuum threshold $s_h$. The functions $I^{\tilde{V}}_n$ can be represented as integrals of the three-particle B-LCDAs; each consists of a universal part and a structure-dependent part. The quantities $P^{\tilde{V}}_{\mu\nu}$, $N^{\tilde{V}}$, and $K^{\tilde{V}}_2$ are listed in Table 1, while the coefficients $C^{(\tilde{V},\psi_{3p})}_{n,r}$ of Eq. (2.32) are provided in an ancillary Mathematica file.
We do not expect to find full agreement between our results and those of Ref. [11] for the matching coefficients $C^{(\tilde{V},\psi_{3p})}_{n,r}$. The reason is that our results are expressed in terms of the full set of three-particle B-LCDAs as discussed in Ref. [48], while the results of Ref. [11] use an incomplete set of Lorentz structures and B-LCDAs. For the calculation of local form factors, this issue is not numerically relevant, since the three-particle contributions are numerically small compared to the leading-twist and even the next-to-leading-twist two-particle contributions; see also the discussion in Ref. [35]. For this particular LCSR calculation, the two-particle contributions are absent and hence the three-particle contributions are numerically leading. Our main results can be summarized as follows: • Restricting our results to the same set of Lorentz structures and independent three-particle B-LCDAs as in Ref. [11], we find full agreement with the results of that paper. • Using the full set of three-particle B-LCDAs, the threshold-setting procedure of Ref. [49] produces results that are compatible with the thresholds obtained for the local form factors in Ref. [35]. This is not the case when restricting our analytical expression to the subset of Lorentz structures discussed in the previous point. • Our final results are one order of magnitude smaller than in Ref. [11] when using the same input parameters as in that paper. This difference becomes even larger when using up-to-date inputs, as explained in detail in the next subsection. We find that this reduction in size arises from cancellations across the Lorentz structures, since the "new" structures enter the coefficient functions with opposite signs. Consequently, the phenomenological impact of the soft-gluon contribution to the non-local matrix elements is significantly reduced in the region where the LCOPE is applicable. Numerical Results The values of the parameters used in our numerical analysis are collected in Table 2. For the $B\to K$ and $B\to K^*$ transitions, these parameters coincide with the ones used in Ref. [35]. In particular, we employ the same effective thresholds $s_0$ given in that paper. This ensures consistency when using both local and non-local form factors in a simultaneous phenomenological analysis. For the $B_s\to\phi$ transition, we need additional parameters that are not discussed in Ref. [35]. We follow the procedure outlined in Ref. [54] to estimate the first inverse moment $1/\lambda_{B_s,+}$ of the leading-twist $B_s$-LCDA. This estimate agrees with the recent calculation carried out in Ref. [57]. The values of the parameters $\lambda^2_{B_s,E}$ and $\lambda^2_{B_s,H}$, which enter the models of $B_s$-LCDAs, are assumed to be equal to $\lambda^2_{B,E}$ and $\lambda^2_{B,H}$, respectively. Given the large uncertainties of these latter parameters, we expect potential $SU(3)$-flavour symmetry-breaking effects to be negligible. To determine the effective threshold for the LCSRs in the $B_s\to\phi$ transition, we first calculate the local $B_s\to\phi$ form factors using the analytical results of Ref. [35]. In fact, these results can be employed to predict the local form factors for any $B\to V$ transition. We then set the effective threshold by applying the procedure described in Ref. [49]. Our numerical predictions for the local $B_s\to\phi$ form factors are given in Appendix D. Note that this is the first calculation of these form factors using LCSRs with B-LCDAs.
For all the transitions considered, we vary the scale of the charm-quark mass in the $\overline{\rm MS}$ scheme between $m_c$ itself and $2m_c$, and the Borel parameter in the interval $0.75\,{\rm GeV}^2 < M^2 < 1.25\,{\rm GeV}^2$, as in Ref. [11]. We verified that in this Borel window the tail of the LCOPE result is much smaller than the LCOPE result integrated between 0 and $\sigma_0$. Moreover, the sum rule dependence on $M^2$ is mild ($< 6\%$ in the Borel window considered here) and negligible compared to the parametric uncertainties in our calculation. We can now evaluate the sum rule (2.30) for the $B\to K^{(*)}$ and $B_s\to\phi$ transitions. The computer code needed to obtain our numerical results will be made publicly available under an open-source license as part of the EOS software [58]. Our predictions for $\tilde{A}$, $\tilde{V}_1$, $\tilde{V}_2$, and $\tilde{V}_3$ are shown in Table 3. For the $B\to K^{(*)}$ transitions we also compare our results with Ref. [11], while the results for the $B_s\to\phi$ transition are calculated for the first time. One can easily observe that our results are roughly two orders of magnitude smaller than in Ref. [11]. As explained in the previous subsection, one order of magnitude can be attributed to the different treatment of the three-particle B-LCDAs in the two papers. The remaining difference is due to the updated input parameters used in our numerical analysis, in particular the values of $\lambda^2_{B,E}$ and $\lambda^2_{B,H}$. These parameters enter as the normalization of the three-particle B-LCDAs, and therefore have a large impact on the overall size of the hadronic matrix elements. In Ref. [11] the approximation $\lambda^2_{B,E} = \lambda^2_{B,H}$ is adopted, which is not justified by calculations of these parameters [22,55,59]. In Ref. [11] it is also assumed that $\lambda^2_{B,E} = \tfrac{3}{2}\lambda^2_{B,+}$, based solely on the desire that the exponential model for the leading-twist B-LCDA satisfies exactly the Grozin-Neubert relations [59]. This assumption yields a central value for $\lambda^2_{B,E}$ approximately 20 times larger than the one found in its most recent calculation [55]. We do not use this rather strong assumption, and use the calculated values instead. We emphasize that although the relative uncertainties of our results listed in Table 3 are similar to the ones in Ref. [11], our absolute uncertainties are much smaller. (The uncertainties of the local form factors can be further reduced by using combined fits to lattice QCD and LCSR results [35,36].) In anticipation of the next section, we keep only the contributions proportional to the charm-quark electric charge $Q_c$ in the above results. Their uncertainties are negligible compared to the ones of the local form factors. The findings of Ref. [11] imply that the next-to-leading power contribution in the LCOPE could be larger than the leading-power contribution in the computation of the non-local form factors $H^{B\to M}_\lambda$. This has been cause for concern about the rate of convergence of the LCOPE, even at spacelike momentum transfer. One of the main findings of our work is that the contribution at next-to-leading power is in fact negligible compared to the theory uncertainties of the leading-power term. The authors of Ref. [11] come to a different conclusion due to the missing terms in their calculation of the hadronic matrix element. As a consequence, theoretical predictions of the $H^{B\to M}_\lambda$ are dominated by the leading power of the LCOPE, i.e. they stem from the first two terms in Eq. (2.24). Therefore, our findings dispel this concern and give confidence that a precise theoretical prediction in the spacelike region is now possible.
Dispersive Bound The results for the non-local form factors $H^{B\to M}_\lambda$ of Section 2 need to be analytically continued from the spacelike region of $q^2$, where they are obtained, to the timelike region, where they are required for phenomenological studies. This requires a suitable parametrization of the hadronic matrix elements. Previous parametrizations based on series expansions involve an uncontrollable truncation error [11,13,15,16]. In the case of local form factors, this problem is solved by imposing dispersive bounds, which provide control over the systematic truncation errors by turning them into parametric errors. However, no such bound has been derived for non-local matrix elements as of yet. The purpose of this section is to derive the dispersive bound for the non-local matrix elements for the first time. To this end, we construct a parametrization of these matrix elements that manifestly satisfies a dispersion relation derived from the total cross section of $e^+e^- \to b\bar{s}X$. We obtain the dispersive bound by matching two representations of the discontinuity, due to $b\bar{s}$ on-shell states, of a suitable correlation function $\Pi$, defined in Eq. (3.1). Here, the operators $O^\mu(q; x)$ and $O^{\dagger,\nu}(q; 0)$ are defined as in Eq. (3.2). The discontinuity ${\rm Disc}_{b\bar{s}}\,\Pi$ satisfies a subtracted dispersion relation, Eq. (3.3), where $n$ is the yet-to-be-determined number of subtractions and $Q^2$ is the subtraction point, chosen so that an OPE can be performed. We first isolate the contribution to $\Pi$ that stems exclusively from the $b\bar{s}$ cut in Section 3.1, thereby determining ${\rm Disc}_{b\bar{s}}\,\Pi$ using a local OPE. We then derive the hadronic dispersion relation for $\chi$ in terms of the non-local hadronic matrix elements in Section 3.2. We then use our knowledge from the previous two sections to construct a parametrization of the non-local hadronic matrix elements that manifestly fulfils a dispersive bound on its parameters in Section 3.3. We finally present a practical application of the dispersive bound in Section 3.4. Calculation of the Discontinuity in a Local OPE We now calculate the contributions to the discontinuity of the correlation function (3.1) that arise exclusively from $b\bar{s}$-flavoured intermediate states. To simplify this task, we use the low-recoil OPE for the operators in Eq. (3.2) well within its region of applicability. Here, $d$ is the mass dimension of the local operators $O^\mu_{d,n}$, while $n$ labels the different operators with the same mass dimension. The Wilson coefficients of operators of dimension $d$ scale with a definite power of the expansion parameter, and the first few operators in this expansion are given in Eq. (3.5) [24]. The Wilson coefficients of the leading dimension-three operators are given in terms of $f^{\rm LO}$ and $F^{(7,9)}_{1c,2c}(q^2)$, in the same conventions as Ref. [30], thereby only retaining the contributions proportional to the charm-quark electric charge $Q_c$. Here and in this section we use $\alpha_s \equiv \alpha_s(m_b)$. Already in the calculation of $C_{3,1}$ at leading order in $\alpha_s$ one encounters UV divergences in dimensional regularization. Hence, the coefficients $C_{3,j}$ are always understood to be renormalized [25]. A few comments are in order regarding the OPE: • the Wilson coefficients of the operators of dimensions 4 and 5 start at $O(\alpha_s)$; • operators of dimensions $d = 3, 4$ interfere with each other, but these interference terms arise only at order $\alpha_s m_s/m_b$; • the operators at dimension $d \geq 5$ do not interfere with the ones at mass dimension $d = 3$ or $d = 4$ at leading order in $\alpha_s$. Based on the above, we adopt the power counting $\varepsilon^2 \sim \Lambda_{\rm had}/m_b \sim \alpha_s^2$.
Thus, up to corrections of order $\varepsilon^3$, we can express the discontinuity of $\Pi$ as ${\rm Disc}_{b\bar{s}}\,\Pi^{\rm OPE}(s) = |C_{3,1}(s)|^2\,{\rm Disc}_{b\bar{s}}\,\Pi_{1,1}(s) + |C_{3,2}(s)|^2\,{\rm Disc}_{b\bar{s}}\,\Pi_{2,2}(s) + \ldots$, where we have used a shorthand notation for the correlators $\Pi_{i,j}$. It is instructive to investigate the perturbative expansion of ${\rm Disc}_{b\bar{s}}\,\Pi^{\rm OPE}$ by writing $\Pi_{i,j}(s)$ as a perturbative series, Eq. (3.9). At LO we find compact expressions in which $\lambda_{\rm kin} \equiv \lambda(m_b^2, m_s^2, s)$. The NLO expressions have been calculated in the context of the gauge-boson self-energies [60]. The one needed here is given in terms of ${\rm Im}\,\Pi^+_T(s)$, which has been calculated in Ref. [60]. Inserting these results in Eq. (3.3), we find that at least two subtractions ($n = 2$) are needed, leading to Eq. (3.14). Hadronic Dispersion Relation By means of unitarity, the discontinuity of the correlation function $\Pi$ can be expressed in terms of a sum of sesquilinear combinations of hadronic matrix elements. The one-body contributions to ${\rm Disc}_{b\bar{s}}\,\Pi$ involve $\bar{B}^*_s$-to-vacuum matrix elements of the non-local operators. While we do not include these contributions here, they can easily be accounted for in future works. The two-body contributions to ${\rm Disc}_{b\bar{s}}\,\Pi$ arise from intermediate $\bar{B}K$, $\bar{B}K^*$, $\bar{B}_s\phi$ and further $b\bar{s}$ states that also include baryons, such as $\Lambda_b\bar{\Lambda}$. Their contributions to ${\rm Disc}_{b\bar{s}}\,\Pi$ can be expressed as a sum over two-body phase-space integrals. Here $H_b$ and $H_{\bar{s}}$ denote hadrons with flavour quantum numbers $B = -1$ and $S = 1$, respectively, and the two-body phase-space measure takes its standard form. Since we work in the isospin limit, the contributions due to $\bar{B}^0 K^{(*)0}$ and $B^- K^{(*)+}$ are identical. Hence, we simply multiply the $\bar{B}^0 K^{(*)0} = B^- K^{(*)+} \equiv \bar{B}K^{(*)}$ contributions by a factor of 2. Keeping only the contributions due to $\bar{B}K^{(*)}$ and $\bar{B}_s\phi$, and using Eq. (A.10), we find the hadronic representation of $\tfrac{3}{32 i \pi^3}\,{\rm Disc}_{b\bar{s}}\,\Pi^{\rm had}(s)$ given in Eq. (3.19). The non-local matrix elements $H^{B\to M}_\mu$ develop a series of branch cuts starting at $q^2 = 4M_D^2$, that is, below the $(M_B + M_M)^2$ threshold. Although they do not contribute to ${\rm Disc}_{b\bar{s}}\,\Pi$, they still spoil the analyticity of the non-local form factors $H^{B\to M}_\lambda$ in the semileptonic region $0 \leq q^2 \leq (M_B - M_M)^2$, which is the phenomenologically interesting one. This makes the derivation of the bound for non-local matrix elements considerably more complicated than the one for local matrix elements (see e.g. Ref. [62]). In particular, it implies that the coefficients of the Taylor expansion in the variable $z$ of the non-local form factors do not fulfil a dispersive bound. In the next section, we show that appropriately chosen functions of $z$, which fulfil a non-trivial orthogonality relation on the integration domain, cure this problem. Derivation of the Bound We start by matching the OPE result onto the hadronic representation of $\chi(Q^2)$, defined in Eq. (3.3) with $n = 2$, by means of global quark-hadron duality, Eq. (3.20). We then use Eq. (3.19) to rewrite Eq. (3.20) as a dispersive bound on weighted integrals of the hadronic matrix elements. Following the usual procedure to obtain dispersive bounds on the parameters of the hadronic matrix elements [63], we define the conformal map $z(s)$ of Eq. (3.22). Here $s_+$ is the lowest branch point of the matrix element and $s_0$ can be chosen freely in the open interval $(-\infty, s_+)$. In our case, we have $s_+ = 4M_D^2$ rather than $(M_B + M_{K^{(*)}})^2$ as in the case of the local $B\to K^{(*)}$ form factors. Using this map and the fact that $z = e^{i\alpha}$ on the unit circle, we obtain Eq. (3.23) (plus further positive terms), where the integration limits are given by the angles $\pm\alpha_{XY}$, with $s_{XY}$ defined previously below Eq. (3.18).
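To make the mapping concrete, the sketch below implements the standard conformal map commonly used for dispersive bounds, which Eq. (3.22) is assumed to take the form of, together with the arc angle at which a two-body threshold lands on the unit circle; the meson masses and the choice $s_0 = 0$ are illustrative inputs.

```python
import numpy as np

def z_map(s, s_plus, s0):
    """Standard conformal map of the cut s plane onto the unit disk
    (Eq. (3.22) of the paper is assumed to be of this form)."""
    a = np.sqrt(s_plus - s + 0j)
    b = np.sqrt(s_plus - s0 + 0j)
    return (a - b) / (a + b)

M_D = 1.86484                  # GeV, D0 mass (illustrative PDG-like value)
s_plus = 4.0 * M_D**2          # lowest branch point of the non-local matrix elements
s0 = 0.0                       # free parameter, chosen as in the text

# A semileptonic point maps inside the disk ...
print(f"z(q2 = 8 GeV^2) = {z_map(8.0, s_plus, s0).real:+.3f}")

# ... while the B K threshold lands on an arc |z| = 1 at angle alpha_BK
M_B, M_K = 5.27966, 0.493677   # GeV, illustrative values
z_bk = z_map((M_B + M_K)**2, s_plus, s0)
print(f"|z| = {abs(z_bk):.3f}, alpha_BK = {np.angle(z_bk):.3f} rad")
```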
The central improvement of this paper is the change of the parametrization discussed in Ref. [16] to one that fulfils a dispersive bound. As in that paper, we remove the dynamical singularities of the non-local form factors $H^{B\to M}_\lambda$ using the Blaschke factor of Eq. (3.25). Here $z_\psi = z(s = M^2_\psi)$ is the location of the two narrow charmonium poles in the complex $z$ plane; these are the only poles on the open unit disk. In addition, to formulate the bound in a concise form and to avoid kinematical singularities, we introduce suitable outer functions $\phi^{B\to M}_\lambda(z)$ [64]. These outer functions are defined such that on the integration domain their modulus squared coincides with Eqs. (C.1)-(C.3), and they are free of unphysical singularities inside the unit disk. The precise form of these functions and their derivation are provided in Appendix C. We can then define the functions $\hat{H}^{B\to M}_\lambda$ of Eq. (3.27), which are analytic on the open unit disk. At this point, we can express the dispersive bound in terms of these functions. The next step is to find a basis of orthogonal functions on an arc of the unit circle covering angles $-\alpha_{XY}$ to $+\alpha_{XY}$. In lieu of a closed formula, we construct the first three orthonormal polynomials $p^{X\to Y}_n$ explicitly; the higher-order polynomials can be determined using an orthogonalization procedure. For an in-depth review of the mathematical properties of these orthogonal polynomials on the unit circle, we refer to Ref. [65]. The practical considerations of this application of the orthogonal polynomials are discussed in Ref. [66]. Using the orthogonal polynomials, we can now expand the functions $\hat{H}^{B\to M}_\lambda$ as in Eq. (3.30), and the dispersive bound then takes the simple form of Eq. (3.31). For local form factors the integration domain is the full unit circle in the $z$ plane; in that case, the $z$ monomials constitute a complete and orthonormal basis of polynomials on the integration domain. We have seen that the integration domain for the non-local form factor case is only an arc of the unit circle, due to the appearance of $\bar{D}D$ and similar branch cuts below the $\bar{B}M$ thresholds. As a consequence, the orthonormal polynomials in this integration domain are the ones given in Eq. (3.29). While these polynomials are clearly much more complicated than the $z$ monomials, they allow us to write the dispersive bound in the diagonal form shown in (3.31). An inconvenient feature of the $p^{B\to M}_n$ polynomials is that their magnitude increases for $n \to \infty$ in the semileptonic region. Nevertheless, since the series in Eq. (3.30) is convergent for $|z| < 1$ due to the analyticity of $\hat{H}^{B\to M}_\lambda$, this only implies that the coefficients $a^{B\to M}_{\lambda,n}$ must fall off sufficiently fast that higher-order terms in the series are suppressed. In the next subsection we present a simple application of the bound in Eq. (3.31) to $H^{B\to K}_0$. We remark that it is also possible to expand the $\hat{H}^{B\to M}_\lambda$ functions in terms of $z$ monomials using the same Blaschke factors and outer functions given here. However, the coefficients of that expansion do not satisfy any dispersive bound. Application to $\bar{B}\to\bar{K}\ell^+\ell^-$ We now explore some of the implications of the dispersive bound (3.31). Considering only the $\bar{B}\to\bar{K}\ell^+\ell^-$ contribution to the bound, we obtain a constraint on $\hat{H}^{B\to K}_0$, where we assume that the series expansion is truncated at $n = N$. Depending on the value of $N$, the bound sets a global constraint on the size of $|\hat{H}^{B\to K}_0|$. This is shown in the left panel of Figure 1, where the constraints for $N = 0, 1, 2$ are shown in yellow, green and blue, respectively. In order to express the result as a function of $q^2$, we have taken $s_0 = 0$. Of course, one may include the theoretical calculation based on the LCOPE at negative $q^2$.
In order to see how this information impacts the constraints on the size of $|\hat{H}^{B\to K}_0|$, we impose that $H^{B\to K}_0$ takes the central values quoted in Table 4 at $q^2 = -1\,{\rm GeV}^2$ and $q^2 = -5\,{\rm GeV}^2$. At this point we need to make a choice for the value of the subtraction point $Q^2$. Here we take $Q^2 = -m_b^2$ in the outer functions. In the case $N = 2$, these two theory constraints fix two independent (complex-valued) combinations of $a^{B\to K}_{0,1,2}$, leaving one complex free parameter. This free parameter is then constrained by the dispersive bound, and leads to the black region shown in the left panel of Figure 1. This black region can be regarded as an estimate of the truncation error when using two theory data points at negative $q^2$ to fix the $N = 1$ series expansion of $\hat{H}^{B\to K}_0(q^2)$. In relative terms, one may consider the ratio of the resulting allowed values for $\hat{H}^{B\to K}_0(q^2)$ with $N = 2$ to the theoretical curve for the $N = 1$ expansion fixed by the two theory data points; this is shown by the black region in Figure 2. (Figure 1 caption: Constraints according to the dispersive bound on the expansion with one (orange), two (green) and three (blue) coefficients. The black region shows the allowed region for the three-coefficient expansion including two theory constraints at $q^2 = -1\,{\rm GeV}^2$ and $-5\,{\rm GeV}^2$. Right: the same, assuming all $\bar{B}\to\bar{K}^{(*)}\ell^+\ell^-$ and $\bar{B}_s\to\phi\ell^+\ell^-$ modes contribute equally to the bound; in this case the region that includes the theory constraints is shown in red.) This example does not exploit the full power of the dispersive bound, in particular because it neglects the fact that other modes (e.g. $\bar{B}\to\bar{K}^*\ell^+\ell^-$ and $\bar{B}_s\to\phi\ell^+\ell^-$) also contribute to the bound. An analysis that takes this into account will lead to simultaneously correlated bounds for all $H^{B\to K}_0(q^2)$, $H^{B\to K^*}_\lambda(q^2)$ and $H^{B_s\to\phi}_\lambda(q^2)$, and is thus beyond the scope of this section. In order to estimate the impact of adding these other modes in a simple setting, we can make the reasonable simplifying assumption that all eleven modes in Eq. (3.31) contribute equally to the bound, and thus take $\sum_{n=0}^{N} |a^{B\to K}_n|^2 < \tfrac{1}{11}$ (this involves an assumption), Eq. (3.33). The corresponding bounds are shown in the right panel of Figure 1, and as the red region in Figure 2. In this case the constraint is tightened by more than a factor of two. Finally, it is interesting to see what these bounds look like at the amplitude level in comparison to the contribution from $C_9$. To that end, we consider the quantity $\Delta C^{B\to K}_{9,{\rm had}}(q^2)$ of Eq. (3.34), which is defined in such a way that it can be compared directly to $C_9$. The corresponding limits on the magnitude of $\Delta C^{B\to K}_{9,{\rm had}}(q^2)$, in the same circumstances as those in Figure 2, are shown in Figure 3 (left panel). One can see that the non-local contribution cannot exceed in magnitude the SM value $C_9(m_b) \simeq 4$ for $q^2 \lesssim 3\,{\rm GeV}^2$, or even for $q^2 \lesssim 5\,{\rm GeV}^2$ in the simplified scenario where the $K^*$ and $\phi$ modes are included. The rise of the bound for $q^2 \to 9\,{\rm GeV}^2$ is due to the fact that $\Delta C^{B\to K}_{9,{\rm had}}(q^2)$ contains a pole at $q^2 = M^2_{J/\psi}$. In this sense it may be convenient to consider instead the combination $P(z)\times\Delta C^{B\to K}_{9,{\rm had}}(q^2)$, where the narrow charmonium poles have been removed. This is shown in the right panel of Figure 3. This quantity is also directly related to the $B\to K\psi_n$ amplitudes at $q^2 = M^2_{\psi_n}$, and thus the experimental measurement of these amplitudes can be used to further constrain the non-local form factor [11,12,16], with the explicit relation given in the conventions of Ref. [16].
These experimental data points are shown in the right-hand plot of Figure 3, and are situated well within the dispersive bounds, as required. Including this experimental information in the determination of $H^{B\to K}_0(q^2)$ is rather important [11,12,16], since it essentially turns the extrapolation from the spacelike to the timelike region into an interpolation, up to undetermined strong phases that are not fixed by the non-leptonic amplitudes. The result of adding these two charmonium data points to the allowed ranges for $|\Delta C^{B\to K}_{9,{\rm had}}(q^2)|$ is shown by the dashed region in the left-hand plot of Figure 3. In this case, only the theory data point at $q^2 = -1\,{\rm GeV}^2$ has been kept, since otherwise the system would be overconstrained. Again, the resulting region is situated well within the dispersive bound. A more detailed and complete phenomenological study of the implications of the dispersive bound and of the determination of the non-local contributions to $\bar{B}\to M\ell^+\ell^-$ amplitudes is left for future work. Summary and Conclusions The contributions from four-quark effective operators are an essential part of the exclusive $\bar{B}_{(s)}\to\{K^{(*)},\phi\}\ell^+\ell^-$ and $\bar{B}_{(s)}\to\{K^*,\phi\}\gamma$ amplitudes. These contributions must be under reasonable theoretical control in order to derive solid conclusions from the measurements of such decay observables. However, they enter the decay amplitudes through a non-local matrix element of non-perturbative nature, which is very difficult to calculate with controlled uncertainties. In this work we have revisited this non-local effect and made progress on two fronts. First, we have recalculated the main subleading effect beyond the local OPE contribution, which arises from a soft gluon coupling to a quark loop. This recalculation involves a light-cone sum rule with $B$-meson light-cone distribution amplitudes (B-LCDAs), and improves upon the only previous calculation of this quantity by including the full set of B-LCDAs up to and including twist four. Our reanalysis leads to a result for this soft-gluon effect that is two orders of magnitude smaller than the previous calculation. A substantial part of the difference is due to cancellations arising from the inclusion of the B-LCDAs that were previously missing; the remaining difference is due to updated inputs. Second, we have revisited the analytic continuation of the non-local effect from the LCOPE region (where it is calculated) to the physical region relevant for $B$ decays. In particular, we have proposed a modified analytic parametrization of the non-local matrix element and derived a dispersive bound that constrains this parametrization. The combination of our new parametrization with the dispersive bound allows for the first time to control the inevitable systematic truncation error from which every existing parametrization of the matrix elements suffers. Our results lead to a better understanding of the non-local contributions to decay modes such as $\bar{B}\to\bar{K}^{(*)}\ell^+\ell^-$ and $\bar{B}_s\to\phi\ell^+\ell^-$, which are currently under intense experimental and theoretical scrutiny. An important question is whether the anomalies observed in these modes are due to physics beyond the SM. Our ability to answer this question hinges on our ability to bound poorly known QCD effects that could potentially be responsible for the discrepancies. We can now say that the first subleading correction to the hadronic non-local contribution is very small in the decays considered here, giving support to theory calculations that neglect this subleading effect.
In addition, our dispersive bound sets a solid foundation for the analytic continuation of calculations from the LCOPE region to the physical one. Future phenomenological applications of the results presented here will lead to more accurate global analyses of $b\to s\ell^+\ell^-$ data. Acknowledgements We are very grateful to Alexander Khodjamirian, Sebastian Jäger, Roman Zwicky and Yu-Ming Wang for helpful discussions. We would also like to acknowledge helpful discussions with Christoph Bobeth, who participated in the first steps of this project. A Definitions of the Hadronic Matrix Elements Following Ref. [16], we use a common set of Lorentz structures $S_\lambda$ to decompose both the local and the non-local hadronic matrix elements emerging in $B\to P$(seudoscalar) and $B\to V$(ector) transitions. In this work, we restrict ourselves to the $B\to K$, $B\to K^*$, and $B_s\to\phi$ transitions, but the considerations of this appendix also apply to, e.g., the $B_s\to K$ and $B_s\to K^*$ transitions. A.1 $B\to P$ Transitions For $B\to P$ transitions there are two independent Lorentz structures $S^\lambda_\mu$ in the decomposition of the matrix elements. Here $\mu$ is a Lorentz index and $\lambda = 0, t$ denotes either longitudinal or timelike polarization of the underlying current. Using these structures, we decompose the $B\to P$ matrix elements into form factors. Here and throughout this article we suppress the argument of the local and non-local form factors, which are functions of the momentum transfer squared: $F^{B\to M}_\lambda \equiv F^{B\to M}_\lambda(q^2)$ and $H^{B\to M}_\lambda \equiv H^{B\to M}_\lambda(q^2)$. The relations between our local form factor basis and the traditional basis of form factors (see, e.g., Refs. [35,56]) are given explicitly in this appendix. The non-local form factor $H^{B\to P}_0$ defined in Eq. (A.4) is related to the non-local form factor $H^{B\to P}$ defined in Ref. [11] through a simple kinematic relation. A.2 $B\to V$ Transitions For $B\to V$ transitions there are four independent Lorentz structures $S^\lambda_{\alpha\mu}$ in the decomposition of the matrix elements. Here $\alpha$ and $\mu$ are Lorentz indices and $\lambda = \perp, \parallel, 0, t$ denotes the different polarizations of the underlying current. The structures involve $\lambda_{\rm kin} \equiv \lambda(M_B^2, M_V^2, q^2)$, the Källén function. We decompose the local and non-local $B\to V$ matrix elements accordingly. The minus signs in front of $F^{B\to V}_\parallel$ and $F^{B\to V}_0$ have been introduced to ensure that the form factors are all positive in the semileptonic phase space; the signs in front of $H^{B\to V}_\parallel$ and $H^{B\to V}_0$ then follow. The relations between our local form factor basis and the traditional basis of form factors (see, e.g., Refs. [35,56]) are given as well. The non-local form factor $H^{B\to V}_0$ defined in Eq. (A.10) is related to the non-local form factor $H_i$ defined in Ref. [11] through Eq. (A.12). B Matching Coefficient at Subleading Power The matching coefficient for the next-to-leading power of the LCOPE of the correlator (2.2) was computed for the first time in Ref. [11]. The result was written in the form
$$I_{\mu\rho\alpha\beta}(q,\omega_2) = \frac{1}{8\pi^2}\int_0^1 du\,\Big\{\big[\bar{u}\,\bar{q}_\mu \bar{q}_\alpha\,g_{\rho\beta} + u\,\bar{q}_\rho \bar{q}_\alpha\,g_{\mu\beta} - \bar{u}\,\bar{q}^2\,g_{\mu\alpha}g_{\rho\beta}\big]\,\frac{dI(\bar{q}^2)}{d\bar{q}^2} - \frac{\bar{u}-u}{2}\,g_{\mu\alpha}g_{\rho\beta}\,I(\bar{q}^2)\Big\}\,, \qquad {\rm (B.1)}$$
where $I(q^2) = \int_0^1 dt\,\ln\frac{\mu^2}{m_c^2 - t(1-t)q^2}$. It is convenient to rewrite the function $I(q^2)$ in a form that isolates the scale logarithm. Since the part of $I_{\mu\rho\alpha\beta}$ proportional to $I(q^2)$ is multiplied by $(\bar{u}-u)$, the term $\ln\frac{\mu^2}{m_c^2}$ vanishes after integrating over $u$. In addition, a standard identity for the $\varepsilon$ tensor is used; in this work we adopt the convention $\varepsilon^{0123} = +1$. Here $s_0$ is a parameter of the $s\to z$ mapping (3.22). The values of the parameters $a, b, c, d$ of Eq. (C.5) for the outer functions considered in this work are listed in Table 5.
Clustering Nodes and Discretizing Movement to Increase the Effectiveness of HEFA for a CVRP A Capacitated Vehicle Routing Problem (CVRP) is an important problem in transportation and industry. It is challenging to solve using optimization algorithms, and it is not easy to achieve a global optimum solution. Hence, many researchers use a combination of two or more optimization algorithms, based on swarm intelligence methods, to overcome the drawbacks of a single algorithm. In this research, a CVRP optimization model, which contains the two main processes of clustering and optimization, based on a discrete hybrid evolutionary firefly algorithm (DHEFA), is proposed. Evaluations on three CVRP cases show that DHEFA produces an averaged effectiveness of 91.74%, which is much more effective than the original FA that gives a mean effectiveness of 87.95%. This result shows that clustering nodes into several clusters effectively reduces the problem space, and the DHEFA quickly searches for the optimum solution in those partial spaces. Keywords—Swarm intelligence; capacitated vehicle routing problem; firefly algorithm; differential evolution; hybrid evolutionary firefly algorithm An optimization algorithm can find a good solution to a VRP, but finding the global optimum solution takes a long time. Both deterministic and probabilistic optimization algorithms have specific problems. The deterministic algorithms guarantee a global optimum solution, but their processes take a long time. In contrast, the probabilistic algorithms are commonly fast, but they do not always produce a global optimum solution. In practice, probabilistic algorithms are preferable in terms of fast processing time. HEFA has been proven to produce high performance for many optimization problems [28]. Hence, in this paper, a discrete version of HEFA, called DHEFA, is exploited to develop a CVRP optimization model. A new idea of HEFA-based clustering is also proposed to make the model more effective in searching for the minimum-cost route. Next, the fundamental theory of CVRP and HEFA is described in Section II. The proposed models of HEFA-based clustering and DHEFA-based optimization are then explained in more detail in Section III. After that, Section IV discusses the simulation results. Section V eventually provides the conclusion and the further plan. II. FUNDAMENTAL THEORY VRP is a combinatorial optimization problem, which is an extension of the Traveling Salesman Problem (TSP) [3]. It has a basic form called the Capacitated VRP (CVRP) [7]. Unlike the plain VRP, the CVRP has an additional constraint when searching for optimum vehicle order schedules: each node visited has a load that should be accommodated, and each vehicle has a maximum capacity that cannot be violated. This makes the optimum solution depend not only on the results of vehicle scheduling but also on the burden that each vehicle can accommodate. The total distance of the schedule is formulated as $x_{tot} = \sum_{i=1}^{k}\sum_{j=1}^{c_i} x_j$, where $x_{tot}$ is the total distance, $k$ is the number of vehicles, $c_i$ is the number of nodes contained in the $i$th vehicle, and $x_j$ is the distance of the route traversed by the $j$th vehicle. A concrete reading of this objective is sketched in the code below.
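The sketch below sums the per-vehicle route distances of a candidate CVRP solution. The depot location, the closed-tour convention and the toy coordinates are assumptions for illustration, since the text does not specify them.

```python
import math

def route_distance(route, coords, depot=(0.0, 0.0)):
    """Distance of one vehicle's tour: depot -> nodes in order -> depot (assumed closed)."""
    path = [depot] + [coords[n] for n in route] + [depot]
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def total_distance(solution, coords, depot=(0.0, 0.0)):
    """x_tot as the sum of the distances over the k vehicle routes."""
    return sum(route_distance(r, coords, depot) for r in solution)

coords = {1: (2, 1), 2: (5, 4), 3: (1, 6), 4: (7, 2)}  # toy node coordinates
solution = [[1, 3], [2, 4]]                             # k = 2 vehicles
print(f"x_tot = {total_distance(solution, coords):.2f}")
```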
A. Firefly Algorithm FA is inspired by the movements of fireflies looking for a partner, which is based on two things: the attraction between fireflies and the intensity of light. The light intensity is basically the value of the objective function. Unfortunately, the light intensity is not the same in every place. Therefore, following [20], the light intensity of one firefly as seen by another can be formulated as $I(r) = I_0\,e^{-\gamma r^2}$, where $I_0$ is the fitness value, $\gamma$ is the light absorption coefficient, and $r$ is the distance, as a scalar value, between the chasing individual and the individual being pursued. Just like the light intensity, the attraction is dynamic, since the distance determines its change. The further the distance between the fireflies, the smaller the attraction. Hence, the attractiveness function is formulated as $\beta(r) = \beta_0\,e^{-\gamma r^2}$, where $\beta_0$ is the initial attractiveness value between two individuals, and it is generally set to 1. In the original version for continuous-problem optimization, the distance is calculated using the Euclidean distance $r = \lVert x - y\rVert = \sqrt{\sum_{k=1}^{n}(x_k - y_k)^2}$, where both $x$ and $y$ are $n$-dimensional vectors. Meanwhile, the firefly movement is calculated using the formula $x_i \leftarrow x_i + \beta\,(x_j - x_i) + \alpha\,\epsilon$, where $\beta$ is the attractiveness value, $\alpha$ is a random value from 0 to 1, and $\epsilon$ is a random perturbation vector. B. Differential Evolution (DE) DE is one of the Evolutionary Algorithms (EAs) [28], where the key processes of this algorithm are mutation, crossover, and selection. The mutation process in DE uses velocity vectors from two random vectors. This velocity vector then becomes the driving force for new vectors, which are not the two previous vectors. The DE mutation formula is represented as $v_{i(t+1)} = x_{i(t)} + F\,(x_{k(t)} - x_{j(t)})$, where $v_{i(t+1)}$ is a mutation vector, $x_{i(t)}$ is an old vector, and $F\,(x_{k(t)} - x_{j(t)})$ is a scaled difference of random vectors from other individuals. Meanwhile, the crossover scheme in HEFA is simply represented as an element-wise exchange [28]: $u_{i(t+1)} = v_{i(t+1)}$ if ${\rm rand} \leq c_r$ and $u_{i(t+1)} = x_{i(t)}$ otherwise, where $u_{i(t+1)}$ is the crossover result and $c_r$ is the crossover rate, i.e. a constant value deciding when an element must be crossed over. The selection process is then formulated as $x_{i(t+1)} = u_{i(t+1)}$ if it has a better fitness value than $x_{i(t)}$, and $x_{i(t+1)} = x_{i(t)}$ otherwise; this process only selects between the old vector $x_i$ and the crossover vector $u_{i(t+1)}$ based on their fitness values. A sketch combining these FA and DE steps is given below.
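The sketch ties the formulas of Sections II.A and II.B together in one continuous-space iteration step: an FA attraction move and a DE mutation/crossover/selection step. The objective function, population handling and parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Placeholder objective (minimization); any fitness function can be used."""
    return float(np.sum(x**2))

def fa_move(xi, xj, gamma=0.95, beta0=1.0, alpha=0.2):
    """FA move toward a brighter firefly: x_i + beta*(x_j - x_i) + alpha*(rand - 0.5)."""
    beta = beta0 * np.exp(-gamma * np.sum((xi - xj)**2))
    return xi + beta * (xj - xi) + alpha * (rng.random(xi.size) - 0.5)

def de_step(pop, i, fit=sphere, f=0.5, cr=0.5):
    """DE step: v = x_i + F*(x_k - x_j), binomial crossover, then greedy selection."""
    j, k = rng.choice([m for m in range(len(pop)) if m != i], size=2, replace=False)
    v = pop[i] + f * (pop[k] - pop[j])
    u = np.where(rng.random(pop[i].size) <= cr, v, pop[i])
    return u if fit(u) < fit(pop[i]) else pop[i]

pop = rng.uniform(-5, 5, size=(6, 2))   # toy population of 6 fireflies
print(fa_move(pop[0], pop[1]), de_step(pop, 5))
```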
C. Hybrid Evolutionary Firefly Algorithm When building a good program of collective intelligence, the balance between exploitation and exploration plays an important role. High exploitation makes the program converge too quickly, which is known as premature convergence, and consequently the program fails to find the best (globally optimal) solution. In contrast, too much exploration means the program does not converge to a global optimum; it tends to behave like a random search. In FA, the process of balancing exploitation and exploration is focused on regulating the values of γ and α. The α is responsible for the exploration process in the FA and is usually a small value. The small α keeps the FA from behaving like a random search but, at the same time, the exploration area becomes smaller, as illustrated in Fig. 1. A small radius α limits the movement of FA exploration: each firefly, drawn as a dark blue circle, cannot explore areas outside its population. In cases where the solution space is greater than the radius of the distribution of fireflies, some areas within the solution space cannot be traced. Nevertheless, this exploration problem can be solved using DE. Fig. 2 shows that the DE behavior of moving based on other vectors gives DE a significant exploration radius. With this broad reach, DE can explore even outside the population area, which is one of the reasons why DE can complement the FA. HEFA is a combination of FA and DE, introduced by Afnizanfaizal Abdullah in [28]. The movement process of the algorithm is quite simple: HEFA divides the firefly population into two parts based on their fitness values. Half of the population with high fitness values exploits using the FA scheme, while the rest with poor fitness scores explore using the DE scheme. The experimental results in [28] prove that HEFA is excellent at solving complex problems and nonlinear biological models. III. PROPOSED MODEL The proposed DHEFA-based CVRP optimization model is illustrated in Fig. 3. It receives a dataset of nodes. First, the dataset is clustered using a HEFA. The produced optimum centroids are then exploited to initialize a population of fireflies in a DHEFA, where an individual firefly represents a candidate solution of a route. Finally, the DHEFA searches for a minimum-cost route as the best solution. The most challenging step in this optimization problem is dividing the nodes among the available vehicles. This division can be done in a purely random way, by selecting nodes in sequence until reaching the maximum vehicle capacity, and so on. However, clustering the nodes into n clusters, where n is the same as the number of available vehicles, is the best solution, since clustering helps minimise the total distance traveled by each vehicle. A. Dataset of Nodes The dataset used in this research is the Augerat et al. Set B. It has three instances: B-n50-k8, which contains 50 nodes with eight vehicles; B-n66-k9, which consists of 66 nodes with nine vehicles; and B-n78-k10, which contains 78 nodes with ten vehicles. None of the instances provides a clustering of nodes to vehicles, which is important since it affects the total distance traveled by a vehicle. Therefore, a clustering procedure is needed to develop the optimization model. B. HEFA-based Clustering A HEFA-based clustering is exploited here since it has been proven to give high performance. It is expected to produce clusters that are as dense as possible for each vehicle, since the denser the cluster, the lower the total distance for the vehicle. At the beginning of an iteration, each firefly contains a random vector whose size is two times the number of vehicles; a pair of vector elements in a firefly represents the centroid coordinates in the form (x, y). All centroid coordinates are then used to produce a fitness value obtained from the objective function. Half of the firefly population moves to pursue the best fitness value from its perspective, while the rest move quasi-randomly in search of a better fitness value. Once all fireflies have moved, they renew their respective fitness values. This is repeated until the stop condition is reached, and the HEFA produces the best firefly with the highest fitness value. An example of HEFA-based clustering of a set of sixty nodes into three clusters is illustrated in Fig. 4. The coordinates of all centroids produced by the best firefly are then used to determine the cluster of each node. Each node in a cluster is visited by a particular vehicle. The objective function is simply designed here as a sum of squared Euclidean distances (SSE). This function calculates the total distance of all nodes to their respective centroids, formulated as ${\rm SSE} = \sum_{j=1}^{k}\sum_{x_i\in c_j}\lVert x_i - c_j\rVert^2$, where $k$ is the number of clusters, $x_i$ is the $i$th node, and $c_j$ is the centroid of the $j$th cluster. Once the optimum clusters are generated, a check is made for any vehicle carrying a load that exceeds the maximum capacity, as sketched below.
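A minimal sketch of the SSE fitness and of the capacity check that triggers the redistribution described next; the node coordinates, loads and capacity are invented for illustration.

```python
import numpy as np

def sse(nodes, centroids, labels):
    """Sum of squared Euclidean distances of nodes to their assigned centroids."""
    return float(np.sum((nodes - centroids[labels])**2))

def overloaded(labels, loads, max_cap, k):
    """Indices of clusters whose total load exceeds the vehicle capacity."""
    totals = np.bincount(labels, weights=loads, minlength=k)
    return np.flatnonzero(totals > max_cap)

nodes = np.array([[2.0, 1.0], [5.0, 4.0], [1.0, 6.0], [7.0, 2.0]])
centroids = np.array([[2.0, 3.0], [6.0, 3.0]])  # e.g. decoded from the best firefly
labels = np.argmin(((nodes[:, None, :] - centroids[None, :, :])**2).sum(-1), axis=1)
loads = np.array([3.0, 4.0, 2.0, 5.0])
print(sse(nodes, centroids, labels), overloaded(labels, loads, max_cap=8.0, k=2))
```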
Any node in an over-loaded vehicle is then redistributed to the nearest under-loaded vehicle, as illustrated in Fig. 5: when the load Cap of cluster c_2 exceeds the maximum capacity MaxCap, the cluster looks for the closest vehicle to which one or more nodes can be redistributed, and the node closest to cluster c_1 is selected and moved into c_1. C. DHEFA-based Optimization Finally, a minimum-cost route is searched using DHEFA. Since determining the route is a discrete problem (a sequence in which nodes should be visited), the HEFA has to be redesigned as a discrete model. In [29], a high-performing discrete firefly algorithm (DFA) is proposed, and in this paper the discrete model of FA is designed by following the concept of DFA. At the beginning of the iteration, each firefly in DHEFA consists of a random permutation vector whose length equals the total number of nodes, with non-repeating elements ranging from one to the total number of nodes. This vector is divided into as many segments as there are vehicles, where the nodes contained in each vehicle segment follow the clusters resulting from the previous HEFA-based clustering and the redistribution procedure. An example of the firefly representation is illustrated in Fig. 6. HEFA uses a distance determined by the difference between two firefly vectors, whereas DHEFA calculates the distance as the number of differing elements between two fireflies (also known as the Hamming distance [30]), as illustrated in Fig. 7. Another difference is the movement of the fireflies. This movement does not add an attraction term to the i-th firefly vector as in the continuous movement formula above, but instead uses an insertion function. This function takes a random node and swaps it with another random node [29], as illustrated in Fig. 8. In this CVRP case, the insertion is restricted to two nodes in the same cluster, since the vectors in the fireflies are divided by the number of vehicles; it cannot exchange two elements belonging to two different vehicles. Therefore, when choosing a random element in segment k_i, the second random element must also lie in k_i. This exchange is carried out Hamming distance × γ times, as sketched in code below. Just like in HEFA, the movement of a firefly in DHEFA also depends on its fitness value, here the total cost of the route it encodes. Half of the firefly population chases the best fireflies from its perspective, while the rest move randomly, expecting to obtain better fitness values. All fireflies then update their fitness values to be compared in the next iteration. When the stopping condition is reached, the best firefly is chosen as the minimum-cost solution, as illustrated in Fig. 9.
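A minimal sketch of this discrete movement is given below, assuming that each vehicle's nodes occupy a contiguous segment of the route vector; the segment representation and function names are illustrative assumptions rather than the authors' code.

import random

def hamming(route_a, route_b):
    """Hamming distance: number of positions at which two route vectors differ."""
    return sum(a != b for a, b in zip(route_a, route_b))

def discrete_move(route, target, segments, gamma=0.95, rng=random):
    """Move `route` towards `target` by random within-segment swaps.

    segments : list of (start, end) index ranges, one per vehicle, so that
               a swap never exchanges nodes between two different vehicles.
    The number of swaps follows the text: Hamming distance x gamma.
    """
    route = list(route)
    n_swaps = int(hamming(route, target) * gamma)
    for _ in range(n_swaps):
        start, end = rng.choice(segments)          # pick one vehicle segment
        if end - start < 2:
            continue                               # nothing to swap in this segment
        i, j = rng.sample(range(start, end), 2)    # two positions in the same segment
        route[i], route[j] = route[j], route[i]
    return route

Used inside the DHEFA loop, a firefly in the better half would call discrete_move towards a brighter firefly, while fireflies in the worse half perform the same swaps against randomly chosen targets.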
IV. RESULTS AND DISCUSSION In this research, the proposed DHEFA-based model is evaluated and compared with the original FA-based model using three cases of CVRP. Each experiment is run five times to give statistically more reliable results. In each case, an effectiveness metric is used to measure how close the cost of the obtained route is to the cost of the known global-optimum route from the dataset. In this evaluation, both FA and DHEFA use the same parameter settings: γ = 0.95, α = 0.2, and c_r = 0.5. The results are listed in Table I. In all cases, DHEFA produces higher effectiveness than the original FA. In the CVRP case of B-n50-k8, with 50 nodes and eight vehicles, DHEFA produces an average effectiveness of 94.25%, while the original FA gives 90.23%. In the CVRP case of B-n66-k9, which contains 66 nodes and nine vehicles, DHEFA also reaches a higher average effectiveness of 93.84%, while the FA obtains 92.27%. Meanwhile, in the CVRP case of B-n78-k10, with 78 nodes and ten vehicles available, DHEFA achieves a much higher average effectiveness of 87.13%, whereas the original FA yields only 81.36%. Averaged over the three cases, DHEFA thus reaches 91.74% effectiveness, clearly above the 87.95% obtained by the original FA. The effectiveness of DHEFA is strongly supported by the node-clustering procedure: dividing the nodes into clusters reduces the problem space into smaller regions in which the optimization can be applied separately. We conclude that the research objective stated in Section I has been reached. V. CONCLUSION The proposed model of DHEFA-based CVRP optimization is capable of reaching an average effectiveness of 91.74%, better than the original FA, which gives a mean effectiveness of 87.95%. This shows that the proposed clustering significantly increases the effectiveness of DHEFA; it can be explained simply by the fact that clustering the nodes reduces the problem space into smaller regions in which the optimization can be applied separately. In future work, a more advanced redistribution procedure could be introduced to ensure that all vehicles carry fair loads while not violating the maximum capacity.
3,553.2
2020-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Molecular structure and the twist-bend nematic phase: the role of spacer length in liquid crystal dimers ABSTRACT The liquid crystal dimers, the 1-(4-substitutedazobenzene-4′-yloxy)-4-(4-cyanobiphenyl-4′-yl)butanes (CB4OABX), are reported in which the terminal substituent is either a methyl, methoxy, butyl, butyloxy, cyano or nitro group. The butyloxy spacer endows these dimers with the required molecular curvature to exhibit the twist-bend nematic phase in addition to showing the conventional nematic phase. Their transitional properties are compared to those of the corresponding dimers with either a pentyloxy or hexyloxy spacer. As expected, the even-membered pentyloxy-based dimers show the highest nematic–isotropic transition temperature, T NI, and exhibit smectic behaviour. These observations are attributed to their linear molecular shapes. The values of both the twist-bend nematic–nematic transition temperature, T NTB-N, and T NI increase on passing from the butyloxy to hexyloxy spacer, but the change in T NI is greater than that in T NTB-N. Thus, the ratio T NTB-N /T NI is greater for the shorter spacer, reinforcing the view that molecular curvature drives the formation of the N TB phase relative to the N phase. By comparison, the melting point decreases on passing from the butyloxy to hexyloxy spacer. Thus, increasing molecular curvature simultaneously increases both the melting point and N TB phase stability, and this highlights the design challenge in obtaining dimers that exhibit enantiotropic N TB–I transitions. Introduction The twist-bend nematic phase, N TB, is a fascinating liquid crystal phase in which achiral molecules spontaneously assemble into chiral arrangements; it was the first example of spontaneous chiral symmetry breaking in a fluid with no spatial ordering [1][2][3][4]. The N TB phase was predicted using symmetry arguments by Dozov [5], and at the root of this prediction is the assertion that bent molecules have a strong tendency to pack into bent structures. Pure uniform bend, however, cannot fill space and must be accompanied by other local deformations of the director, specifically either twist or splay. In the case of twist, this gives rise to the N TB phase, in which the director forms a heliconical structure and is tilted with respect to the helical axis (an explicit parametrisation is written out below). The symmetry breaking is spontaneous, such that left- and right-handed helices are degenerate and, hence, form in equal amounts. This degeneracy is removed by molecular chirality, and the chiral twist-bend nematic phase is then observed [6,7]. A further key feature of the N TB phase is a surprisingly short pitch length, typically corresponding to just a few molecular lengths. The N TB phase is normally formed on cooling a conventional nematic phase, and only rarely is a direct N TB–isotropic phase transition observed [8][9][10][11][12][13]. The N TB phase is not only of very significant fundamental interest but also has considerable application potential [14][15][16][17][18][19]. Dozov also predicted the existence of twist-bend smectic phases, and these have recently been found experimentally [20][21][22][23][24]. In the vast majority of twist-bend nematogens, the required molecular curvature for the observation of the N TB phase is realised using odd-membered liquid crystal dimers [58][59][60][61][62][63][64][65][66][67][68][69][70][71][72][73][74].
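For concreteness, the heliconical director structure referred to above can be written down explicitly. This parametrisation is the standard form following Dozov's description and is added here for the reader's convenience; it is not taken from the present paper. The director at height z along the helical axis is

n(z) = (sin θ cos φ(z), sin θ sin φ(z), cos θ), with φ(z) = ±2πz/p,

where θ is the fixed conical tilt angle and p is the pitch. Setting θ = 0 recovers the uniform director of the conventional N phase, and the two signs of φ correspond to the degenerate left- and right-handed domains that form in equal amounts.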
In a liquid crystal dimer, two mesogenic groups are linked through a flexible spacer, and if an odd number of atoms connects the two groups, then the molecule is, on average, bent [75,76]. The bend angle in such molecules depends on a number of factors including the nature of the links between the spacer and mesogenic units. The length of the spacer also governs the molecular curvature, and as the spacer is increased in length, there is an increased number of conformations available to it and the liquid crystal field preferentially selects the more linear of these [75]. This is apparent in the dependence of the N-I transition temperature, T NI, on spacer length for a homologous series of liquid crystal dimers. For short chain lengths, a pronounced alternation in T NI is observed in which even members exhibit the higher values, but this attenuates on increasing spacer length [75]. Whereas an odd member is bent, in an even-membered dimer the two mesogenic units are essentially parallel and the molecule is linear. The linear even members are more compatible with the nematic environment, and hence higher values of T NI are observed. As the spacer length is increased, the difference in average shape between odd and even members decreases and the values of T NI become similar. In designing twist-bend nematogens based on odd-membered liquid crystal dimers, it would appear, therefore, that a short spacer would be the preferred choice in order to obtain a more pronounced molecular curvature. In practice, however, this has not been the case, and the overwhelming majority of twist-bend nematogens have spacer lengths of seven, nine or eleven atoms, and it is much less common to find examples containing three or five atoms (see recent review [73]). The reasons for this are straightforward. In any given homologous series of dimers in which the length of the spacer is varied, the short odd members tend to have high melting points and low liquid crystal transition temperatures, typically showing monotropic phase behaviour [77,78]. On increasing the spacer length, the melting points of the odd members tend to fall and the liquid crystal transition temperatures increase sharply before passing through a maximum and tending towards a limiting value [75,76]. In order to obtain a better understanding of the role of short odd-membered spacers in the formation of the N TB phase, here we report a set of dimers based on an odd-membered spacer containing five atoms, namely a butyloxy spacer, the 1-(4-substitutedazobenzene-4′-yloxy)-4-(4-cyanobiphenyl-4′-yl)butanes, see Figure 1. We refer to these dimers using the acronym CB4OABX in which CB denotes cyanobiphenyl, 4O the butyloxy spacer, AB azobenzene and X the terminal group. In order to establish the role played by molecular curvature, we also report the properties of the corresponding linear even-membered CB5OABX dimers based on a pentyloxy spacer and compare their properties with those of the longer odd-membered CB6OABX dimers reported previously [79,80]. Synthesis The synthetic route used to obtain the CBnOABX dimers is shown in Scheme 1 and is based upon that described for the preparation of the CB6OABX dimers [79]. We note, however, that we have recently reported a convenient, one-pot synthetic method to obtain the ω-bromo-1-(4-cyanobiphenyl-4′-yl)alkanes [81] that would significantly simplify the preparation of the CBnOABX dimers. The preparation of the 4-hydroxy-4′-substitutedazobenzenes has been described in detail elsewhere [82][83][84][85].
The synthesis and characterisation of the CB4OABX and CB5OABX dimers and their intermediates are fully described in the ESI. Thermal characterisation The thermal behaviour of the materials was studied by differential scanning calorimetry (DSC) using a Mettler Toledo DSC1 equipped with a TSO 801RO sample robot and calibrated using indium and zinc standards. Heating and cooling rates were 10°C min−1, with a 3-minute isotherm between heating and cooling segments. Thermal data were extracted from the second heating trace unless otherwise stated. Samples were run in duplicate, and an average of the two measurements of temperature and change in entropy is reported. Phase characterisation was performed using polarised optical microscopy (POM), using either a Zeiss Axio Imager.A2m microscope equipped with a Linkam THMS600 heating stage or an Olympus BH2 polarising light microscope equipped with a Linkam TMS92 hot stage. Planar aligned cells with an ITO conducting layer and cell thickness of 2.9-3.5 μm were purchased from INSTEC. Molecular modelling The geometric parameters of the dimers studied were obtained using quantum mechanical DFT calculations with the Gaussian 09 software [86]. Optimisation of the molecular structures was carried out at the B3LYP/6-31G(d) level of theory. Visualisations of the space-filling models were produced post-optimisation using the QuteMol package [87]. Table 1 lists the transitional properties of the CB4OABX dimers. All six CB4OABX dimers exhibit nematic behaviour, and all but X = CN and NO 2 also show the N TB phase. All nematic, N, phases were assigned on the basis of the observation of a schlieren texture containing both two- and four-brush point singularities and which flashed when subjected to mechanical stress, see Figure 3(a). The values of the scaled nematic-isotropic entropy change, ∆S NI /R, listed in Table 1 are consistent with this assignment [88]. The transition from the nematic to the twist-bend nematic, N TB, phase was accompanied by the cessation of the flickering associated with director fluctuations and the formation of a somewhat ill-defined blocky schlieren texture, Figure 3(b,c). The monotropic nature of the N TB phases precluded their study using X-ray diffraction, and to confirm this assignment, a phase diagram was constructed using binary mixtures of CB4OABOMe and the standard twist-bend nematogen, CB7CB [1], see Figure 4. Complete miscibility was observed for the range of compositions studied, and all the mixtures exhibited two nematic phases, at higher temperatures the conventional N phase and at lower temperatures the N TB phase. These were identified on the basis of the observation of either a characteristic nematic schlieren texture or the blocky N TB schlieren texture, see Figure 5. The value of T NTB-N measured for CB4OABOMe fits the N TB–N line in the phase diagram perfectly and confirms the N TB phase assignment. CB5OABX dimers The transitional properties of the CB5OABX dimers are listed in Table 2. All six dimers are enantiotropic nematogens, and the nematic phase was identified using polarised light microscopy as described earlier; a representative texture is shown as Figure 6(a). On cooling the nematic phase of CB5OABMe and CB5OABOBu, a focal conic fan texture developed, see Figure 6(b), indicating the formation of a smectic phase. The monotropic nature of these smectic phases and their tendency to crystallise precluded their study using X-ray diffraction.
Comparison of the CBnOABX dimers The melting points of the CBnOABX dimers are compared in Figure 7. For any given X, the melting point is the lowest for the CB6OABX dimer, with the exception of X = OMe, for which CB6OABOMe melts 5°C higher than CB4OABOMe. This exception to the general trend may be attributed to the favourable mixed mesogenic unit interaction that is facilitated by the longer spacer. It is clear that the even-membered dimers tend to have the highest melting points, reflecting the greater ability of the more linear even-membered dimers to pack into a crystalline structure, see Figure 8. The notable exception to this trend is the much higher melting point seen for CB4OABCN compared to CB5OABCN, but the physical significance of this observation is not clear. The different trends in the melting points for a given value of n on varying X may reflect, in part, the role played by the spacer in the packing of the molecules in order to maximise the interaction between the unlike mesogenic units and the ability to form intercalated arrangements [90,91]. It is noteworthy, however, that the dimer containing the butyl terminal group has the lowest melting point for each value of n, and this reflects both the flexibility of the butyl chain and that it protrudes out of the plane of the phenyl ring to which it is attached [92]. The values of T NI for the CBnOABX dimers show a more regular dependence on X (Figure 9) than do their melting points (Figure 7). Specifically, the values of T NI shown by the CB5OABX dimers are higher than those of the corresponding odd-membered dimers, and this may be understood in terms of the linear shape adopted by even-membered dimers, as described earlier and shown in Figure 8. The values of T NI shown by the dimers with n = 6 are greater than those of the corresponding dimers with n = 4. This reflects the rather general observation that, within a homologous series of dimers in which the length of the spacer is varied, the clearing temperature of the odd members tends to pass through a maximum on increasing spacer length, whereas those of the even members simply fall [75]. This behaviour may be accounted for in terms of the average shapes of the dimers, which may be visualised, to a first approximation, in terms of the all-trans conformation, and it is apparent that the shorter odd-membered spacer gives rise to a more pronounced molecular curvature, see Figure 8. On increasing the length of the odd-membered spacer from n = 4 to n = 6, the enhanced flexibility allows the dimer to adopt more linear conformers, and this increases T NI. For long odd-membered spacers, the dilution of the mesogenic units offsets this effect and T NI falls. For a given value of n, the values of T NI on varying X may be understood in terms of how the terminal substituent changes the shape and polarisability of the mesogenic unit to which it is attached. As with the melting points, the lowest values of T NI are observed for the butyl substituent and, again, this reflects that the butyl chain protrudes at an angle from the plane of the phenyl ring to which it is attached [93][94][95]. The values of the scaled entropy change associated with the N-I transition, ∆S NI /R, are several times larger for the CB5OABX dimers than for the corresponding odd-membered dimers. This may be understood in terms of the conformational and orientational contributions to the total entropy change.
Although in the previous discussion we noted that, to a first approximation, the alternation in the values of T NI on increasing n may be understood in terms of the change in shape as represented by the all-trans conformation, such an explanation does not account for the very much larger values of ∆S NI /R seen for even-membered dimers. Instead, we must remember that the spacer is flexible and that even-membered dimers have a greater number of conformations in which the two mesogenic units are more or less parallel, and these conformers are preferentially selected by the nematic environment. This gives rise to a greater conformational change at T NI for even-membered than for odd-membered dimers. It has been estimated that this accounts for around 20% of the total entropy change seen for an even-membered dimer, whereas this conformational contribution is vanishingly small for an odd-membered dimer [77]. The major contribution to the large difference in ∆S NI /R between odd- and even-membered dimers, however, arises from the alternation in the long-range orientational order [77,96]. The value of ∆S NI /R increases on passing from n = 4 to n = 6 for any given terminal substituent X, and this reflects the decrease in molecular biaxiality on increasing spacer length [97,98]. This reinforces the view that the shorter odd-membered dimers exhibit a more pronounced molecular curvature. All six members of the CB6OABX dimers exhibit the N TB phase, whereas, as we have seen, just four members of the CB4OABX dimers do, with no N TB phase seen for X = CN and NO 2, see Table 1. The CB4OABX dimers show lower values of T NTB-N than their CB6OABX counterparts by around 12°C, a smaller reduction than the roughly 17°C seen for T NI. It should be noted that, if this average reduction in T NTB-N is applied to the X = CN and NO 2 dimers with n = 4, then the expected values of T NTB-N are considerably lower than the lowest temperature to which their nematic phases could be cooled prior to crystallisation, 122°C and 111°C, respectively. It may appear counter-intuitive that the values of T NTB-N are lower for the CB4OABX dimers than for their more linear CB6OABX counterparts, Figure 8. This indicates that the stability of the N TB phase is not simply associated with molecular curvature. Instead, the increase in molecular flexibility on increasing n facilitates a better interaction between mesogenic groups, and this compensates for the loss of entropy due to the additional polar order in the N TB phase [57], counteracting the reduction in molecular curvature, and the stability of the N TB phase increases. This also accounts for the observed increase in the value of T NI. It is noteworthy, however, that although the absolute values of T NTB-N increase on moving from n = 4 to 6, the scaled temperature T NTB-N /T NI decreases (Table 1), indicating that the stability of the N TB phase increases relative to that of the N phase as molecular curvature increases. As expected, the linear even-membered CB5OABX dimers do not exhibit the N TB phase but instead show smectic behaviour for X = Me and OBu, see Table 2. This reflects the greater ease of packing linear molecules into a lamellar phase. BrBnOABX dimers The BrB4OABX dimers did not show liquid crystallinity, and their melting points are listed in Table 3. These are higher than those of the corresponding BrB6OABX dimers [79], and the differences range from 13°C for X = OMe to 35°C for X = CN.
The BrB6OABX dimers with X = Me and NO 2 also did not exhibit liquid crystalline behaviour. The remaining four members showed conventional nematic phases, and their values of T NI were lower than those of the corresponding CB6OABX dimers by, on average, 32°C. This was attributed to the decrease in structural anisotropy on replacing a cyano group by a bromine atom [99], and to the loss of the increased tendency of cyanobiphenyl-based materials to associate in an antiparallel fashion, which further enhances structural anisotropy [100]. As we saw earlier, reducing n from 6 to 4 resulted in a decrease in T NI, and it would be reasonable to assume that the BrB4OABX dimers would show lower values of T NI than their BrB6OABX counterparts. This, coupled with the higher melting points shown by the BrB4OABX dimers, accounts for the absence of liquid crystallinity. The transitional properties of the BrB5OABX series are listed in Table 4. All six dimers exhibited an enantiotropic nematic phase, identified on the basis of the observation of a characteristic schlieren texture when viewed through the polarised light microscope; a representative texture is shown as Figure 10(a). In addition, on cooling the nematic phase of BrB5OABCN, a fan-like texture developed, indicating the formation of a smectic phase, see Figure 10(b). The rapid crystallisation of this phase precluded its study using X-ray diffraction, although the value of the entropy change associated with the transition strongly suggests a liquid-like smectic phase. The values of T NI are on average 22°C lower for the BrB5OABX dimers compared to those of the corresponding CB5OABX dimers, and this may again be attributed to the change in structural anisotropy and the reduced tendency to self-organise in an antiparallel fashion. A smectic phase is observed for BrB5OABCN, whereas only an N phase is seen for CB5OABCN, and this may be cooled to much lower temperatures than the value of T SmN seen for BrB5OABCN. The physical significance of this observation is unclear. The values of T NI shown by the BrB5OABX dimers are, on average, 85°C higher than those of the corresponding BrB6OABX dimers, and this may be attributed to the difference in shape between even- and odd-membered dimers as discussed earlier. Conclusions The higher melting points seen for the CB4OABX dimers compared to those of the corresponding CB6OABX materials support the emerging observation that short odd-membered spacers, although promoting molecular bend, paradoxically appear to enhance packing efficiency in the crystal phase. This suggests that the reduction in the melting point on increasing spacer length is associated with the increase in molecular flexibility and is, therefore, entropically driven. The values of both T NI and T NTB-N are lower for the CB4OABX dimers than for the corresponding CB6OABX materials, but the reduction is greater for T NI. Thus, where applicable, the scaled transition temperature, T NTB-N /T NI, is in fact higher for the dimer with the shorter spacer, indicating that direct N TB–isotropic transitions are more likely to be observed for short odd-membered spacers, as appears to be the case in the very limited number of examples observed to date [8][9][10][11]. The challenge is now to design short odd-membered dimers having low melting points in order to realise an enantiotropic N TB–I transition. Disclosure statement No potential conflict of interest was reported by the author(s).
4,701.2
2023-03-16T00:00:00.000
[ "Physics" ]
Ectopic Expression of Homeobox Gene NKX2-1 in Diffuse Large B-Cell Lymphoma Is Mediated by Aberrant Chromatin Modifications Homeobox genes encode transcription factors ubiquitously involved in basic developmental processes, deregulation of which promotes cell transformation in multiple cancers including hematopoietic malignancies. In particular, NKL-family homeobox genes TLX1, TLX3 and NKX2-5 are ectopically activated by chromosomal rearrangements in T-cell neoplasias. Here, using transcriptional microarray profiling and RQ-PCR, we identified ectopic expression of the NKL-family member NKX2-1 in the diffuse large B-cell lymphoma (DLBCL) cell line SU-DHL-5. Moreover, in silico analysis demonstrated NKX2-1 overexpression in 5% of examined DLBCL patient samples. NKX2-1 is physiologically expressed in lung and thyroid tissues, where it regulates differentiation. Chromosomal and genomic analyses excluded rearrangements at the NKX2-1 locus in SU-DHL-5, implying alternative activation. Comparative expression profiling implicated several candidate genes in NKX2-1 regulation, variously encoding transcription factors, chromatin modifiers and signaling components. Accordingly, siRNA-mediated knockdown and overexpression studies confirmed involvement of the transcription factor HEY1, the histone methyltransferase MLL and ubiquitinated histone H2B in NKX2-1 deregulation. Chromosomal aberrations targeting MLL at 11q23 and the histone gene cluster HIST1 at 6p22, which we observed in SU-DHL-5, may therefore represent fundamental mutations mediating an aberrant chromatin structure at NKX2-1. Taken together, we identified ectopic expression of NKX2-1 in DLBCL cells, representing the central player in an oncogenic regulative network compromising B-cell differentiation. Thus, our data extend the paradigm of NKL homeobox gene deregulation in lymphoid malignancies. Introduction Lymphocytes originate from hematopoietic stem cells located in the bone marrow. While T-cells complete their development in the thymus, B-cells differentiate in various lymphoid tissues. Lymphoid malignancies emerge in the bone marrow or in secondary hematopoietic organs, acquiring both general and subtype-specific mutations including chromosomal rearrangements. Accordingly, subtypes of diffuse large B-cell lymphoma (DLBCL) differ in mutations and gene activities [1]. The sub-classification of this type of hematopoietic cancer represents a milestone in oncological research and has extensive implications for diagnosis and therapy. Two major subtypes, namely germinal center-derived B-cell and activated B-cell, are distinguished within the DLBCL entity [2]. It is believed that additional stratification should contribute to improved and better targeted therapies. Therefore, identification of novel genes or gene networks with diagnostic or therapeutic potential is of clinical interest. Deregulated genes in leukemia/lymphoma comprise activated transcription factors (TFs) and signaling components which are either physiologically expressed in early stages of hematopoietic development or ectopically induced. Notable examples include TFs of the basic helix-loop-helix (bHLH) family or constituents of the NOTCH-signaling pathway [3]. The NOTCH gene itself may be activated by rare chromosomal translocations in T-cell acute lymphoblastic leukemia/lymphoma (T-ALL) and by mutations affecting both T-ALL and B-cell malignancies. Targets of NOTCH-signaling comprise MYC and the bHLH genes HES1 and HEY1, which may represent key oncogenes in malignant transformation [4].
Homeobox genes encode transcription factors that direct developmental processes during embryogenesis and are frequently deregulated in cancers, including leukemia/lymphoma. According to their conserved homeobox sequences, this group of TFs has been classified into several subfamilies [5]. NKL family members regulate mesodermal differentiation and organogenesis [6], including NKX2-1, which regulates development of lung and thyroid, together with NKX2-5 and NKX3-1, which regulate that of the heart and prostate, respectively [7][8][9][10]. NKL-family members are involved in T-ALL [11], where activation usually follows chromosomal juxtaposition to potent transcriptional enhancers cognate to T-cell receptor genes at 7p14, 7q35 and 14q11, or the TF-encoding gene BCL11B at 14q32 [12]. Exceptionally, NKL family member NKX3-1 is ectopically expressed in T-ALL cells by the activating TFs TAL1, LYL1 and MSX2 rather than cytogenetically [13,14]. On the other hand, the clustered HOX genes are usually activated by formation of aberrant chromatin structures in leukemia/lymphoma, although chromosomal aberrations are described in T-ALL [15]. Specific covalent modifications of core histones mediated by mutated MLL represent the most frequent mechanism of chromatin deregulation activating this homeobox gene group, including HOXA5 and HOXA10 [16]. MLL encodes a histone H3 methyltransferase and is associated with many cofactors in a ternary complex. Moreover, several genes encoding these cofactors are involved in fusion configurations with the MLL gene [17]. Here, we investigate the aberrant expression of NKL homeobox gene NKX2-1 in the B-cell lymphoma cell line SU-DHL-5. Our data expand the oncogenic role of NKL homeobox genes within the lymphoid system, encompassing the B-cell lineage. We demonstrate mechanisms of NKX2-1 activation in addition to examining its downstream effects, which include deregulation of cell differentiation in DLBCL. Expression profiling For quantification of gene expression via profiling, we used data obtained with HG U133 Plus 2.0 gene chips from Affymetrix (High Wycombe, UK). The datasets were generated at the University of Würzburg and generously provided by Prof. Andreas Rosenwald (Institute of Pathology, University of Würzburg, Germany) or obtained from the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) (www.ncbi.nlm.nih.gov/gds) or from the National Cancer Institute (NCI). The NCI microarray data for SU-DHL-5 are available through the accession numbers SU-DHL-5_SS392729_HG-U133_Plus_2_HCHP-201545_.CEL, SU-DHL-5_SS392730_HG-U133_Plus_2_HCHP-201546_.CEL, and SU-DHL-5_SS392731_HG-U133_Plus_2_HCHP-201547_.CEL, which were combined in this study. Analyses of expression data were performed using Microsoft Excel and online programs. For creation of heat maps we used CLUSTER version 2.11 and TREEVIEW version 1.60 (http://rana.lbl.gov/EisenSoftware.htm). Expression data of 203 DLBCL patient samples were obtained from the NCBI GEO database (accession number GSE11318), as published recently [20]. Statistical analyses of NKX2-1 (dataset 211024_s_at) expression were performed using R-software. Genomic array analysis Genome-wide copy number analysis was performed using the Affymetrix Genotyping Console GTC Software version 4.0 (Affymetrix) and visualized by the Affymetrix GTC-Browser program. The 500K-array dataset for SU-DHL-5 was obtained from the National Cancer Institute (Bethesda, MD, USA), GSK Cancer Cell Line Genomic Profiling Data (https://cabig.nci.nih.
gov/community/caArray_GSKdata/). Real-time quantitative expression analysis (RQ-PCR) was performed on the 7500 Fast Real-time System, using commercial buffer and primer sets (Applied Biosystems, Darmstadt, Germany). For normalization of expression levels we used TATA box binding protein (TBP). Quantitative analyses were performed in triplicate and repeated twice. The standard deviations are indicated in the figures as bars. Analysis of DNA methylation SU-DHL-5 cells were treated with 10 mg/ml trichostatin A (TSA, Sigma) or 10 mM 5-Aza-2′-deoxycytidine (Sigma) for 20 h before being examined by RQ-PCR. To identify particular methylated cytidines within the CpG island at the HOPX locus, genomic DNA of SU-DHL-5 and SU-DHL-4 (for control) was subjected to bisulfite conversion and analyzed as described recently [24]. The converted DNA was amplified by PCR using oligonucleotides as listed in Table S1. The PCR products were subcloned into pGEMT-easy (Promega, Madison, WI, USA) and the inserts sequenced (MWG Eurofins). Sequences of 10 clones were displayed and compared using BiQ Analyzer (http://biqanalyzer.bioinf.mpg.de). Ectopic expression of NKL homeobox gene NKX2-1 in DLBCL cell line Here, we screened 20 leukemia/lymphoma T- and B-cell lines by expression profiling for aberrant activities of NKL-family homeobox genes, which are implicated in T-cell leukemia [11]. Our findings confirmed expression of NKL family members in T-ALL cell lines, namely: TLX1 in ALL-SIL, TLX3 in HPB-ALL, NKX2-5 in PEER and CCRF-CEM, and NKX3-1 in JURKAT, PER-117 and RPMI-8402 (data not shown). In addition, we identified conspicuous expression of NKX2-1 and NKX3-1 in the DLBCL cell line SU-DHL-5. Understanding the unexpected activation of NKL genes in a malignant B-cell line was the main focus of this enquiry. Quantitative expression analysis of NKX2-1 and NKX3-1 by RQ-PCR confirmed their activation in SU-DHL-5 at the RNA level. While NKX3-1 was also expressed in Hodgkin lymphoma (HL), multiple myeloma (MM), B-cell lymphoma (BCL) and additional DLBCL cell lines, NKX2-1 was nearly undetectable in the same set of 25 lymphoma cell lines (Fig. 1A), prompting analysis of NKX2-1 and NKX3-1 in primary cells, including thyroid, lung and particular hematopoietic samples, to chart physiological tissue specificity in the blood system. The results confirmed physiological NKX2-1 expression in tissues of the thyroid and the lung, while in all hematopoietic samples analyzed NKX2-1 transcription was undetectable (Fig. 1B). On the other hand, NKX3-1 expression was confirmed in the cell lines SU-DHL-5, JURKAT, PER-117 and LNCAP (prostate) and physiologically in tissues of the thyroid and the lung, while the hematopoietic samples of B-cells and BM showed only weak activity (Fig. 1C). Protein expression of NKX2-1 and NKX3-1 was analyzed by Western blot. While NKX2-1 protein was clearly displayed in SU-DHL-5, NKX3-1 protein was not detectable in that cell line (Fig. 1B,C), consistent with post-transcriptional inhibition, which was described recently [25]. For this reason we focused our work on the regulation and function of homeobox gene NKX2-1, showing ectopic expression at the RNA and protein level in the DLBCL cell line SU-DHL-5. To examine the expression of NKX2-1 in primary material, we checked 204 datasets of untreated DLBCL cases deposited in the GEO database of the NCBI belonging to the study of Lenz and coworkers [20]. This screening revealed statistically significantly enhanced NKX2-1 activity in 11 (5%) DLBCL patients (Fig.
1D), supporting the relevance of this oncogenic homeobox gene expression in this malignancy. Of note, the NKX2-1 overexpressing DLBCL patients showed no clear correlation with known disease subsets. Since deregulated expression of NKL homeobox genes in T-ALL is primarily caused by chromosomal aberrations, we analyzed the karyotype of SU-DHL-5 by SKY, by FISH and with a genomic array with respect to the NKX2-1 gene, which is located at 14q13 (36.9 Mb). However, copy number data (Fig. 1E), gene-specific FISH results (not shown) and SKY results (Fig. 1F) all returned wild type configurations at this locus, discounting a chromosomal mechanism behind the deregulated transcription. Turning to potential transcriptional regulators which might induce aberrant NKX2-1 activity, we compared expression array data of SU-DHL-5 with 3 control DLBCL cell lines: SU-DHL-4, SU-DHL-10, and SU-DHL-16. After inspection of the top 1000 up- and downregulated genes in SU-DHL-5, potential candidates were shortlisted and functionally categorized as shown in Table 1. This exercise revealed conspicuous involvement of TFs, chromatin and signaling genes, which were then subjected to more detailed consideration. Transcription factors HEY1 and NKX2-1 are mutually activating Among the TFs, the bHLH factor HEY1 was particularly intriguing (one of the top 10 upregulated genes). RQ-PCR analysis of a panel of cell lines confirmed high expression levels in SU-DHL-5 (Fig. 2A). However, MM and HL cell lines also showed moderate transcription of HEY1. RQ-PCR analysis of SU-DHL-5 in comparison to primary cells revealed prominent HEY1 expression in lung, thyroid, and in BM, LN and thymus. These data suggest a functional role of HEY1 in early lymphopoiesis. Peripheral B-cells and PBMCs lacked HEY1 transcripts (Fig. 2A). To analyze the regulatory impact of HEY1 on NKX2-1 expression, we treated SU-DHL-5 cells with siRNA directed against HEY1. Subsequent quantification of HEY1 and NKX2-1 expression demonstrated reduction of both transcripts as compared to cell samples treated with control siRNA (Fig. 2B). Furthermore, overexpression of HEY1 or the related TF HES1 was followed by increased NKX2-1 expression (Fig. 2B). Together, these results show that HEY1 contributes to NKX2-1 expression in the DLBCL cell line SU-DHL-5. That HEY1 acts as a transcriptional repressor suggests indirect activation of NKX2-1, probably via inhibition of negative regulators, as shown below [26]. Sequence analysis of the promoter region of HEY1 identified 3 potential binding sites for NKX2-1 (Fig. 2C), indicating a direct regulatory impact of this homeoprotein on HEY1 expression. Subsequent siRNA-mediated knockdown of NKX2-1 inhibited transcription of NKX2-1 and HEY1, confirming regulation by NKX2-1 (Fig. 2C). ChIP analysis using anti-NKX2-1 confirmed direct binding of NKX2-1 to the promoter region of HEY1 (Fig. 2C). These data show that NKX2-1 activates HEY1 transcription directly. Genomic copy number data and SKY analyses excluded genomic aberrations at the HEY1 locus at 8q21 (Fig. S1), highlighting the role of NKX2-1 in HEY1 regulation. HEY1 and HES1 are prominent targets of the NOTCH-pathway in lymphopoiesis [4]. To analyze the potential impact of NOTCH on HEY1 expression, we treated SU-DHL-5 cells with the γ-secretase inhibitor DAPT. Subsequent RQ-PCR analysis showed reduced HEY1 levels in treated samples (Fig. 2D), confirming NOTCH regulation. Accordingly, siRNA-mediated knockdown of the NOTCH corepressor SPEN enhanced HEY1 expression more than twofold (Fig. 2D).
Additional findings using DAPT discounted regulation of NKX2-1 by NOTCH-signaling (Fig. 2D). Expression of Zn-finger homeobox gene 2 (ZHX2) showed elevated levels in SU-DHL-5 as well (Table 1). Comparative RQ-PCR analysis confirmed high transcript levels in SU-DHL-5, even surpassing primary B-cells (Fig. 2E). Of note, we recently showed reduced expression of ZHX2 in the HL cell line L-1236 and an activating input of homeobox gene MSX1 [27]. Accordingly, siRNA-mediated knockdown of MSX1 in SU-DHL-5 reduced transcription of ZHX2 but not of HEY1 (Fig. 2E), contrasting with the stimulation of HEY1 by the closely related homeobox gene MSX2 in T-ALL cells [28]. SiRNA-mediated knockdown of homeobox gene NKX2-1 reduced transcript levels of HEY1, as shown above, while sparing ZHX2 (Fig. 2E), thus discounting direct regulation of ZHX2 by NKX2-1. However, the precise mechanism of ZHX2 enhancement remains unclear. Aberrant chromatin structures mediate NKX2-1 expression The catalogue of upregulated genes in SU-DHL-5 encoding chromatin components prominently included MLL, which is frequently deregulated in leukemia, where it activates homeobox genes of the clustered type via H3K4 trimethylation [16]. Quantification of MLL expression in cell lines confirmed elevated RNA levels in SU-DHL-5 in addition to HL cells (Fig. 3A). SiRNA-mediated knockdown of MLL inhibited expression of NKX2-1 but not of HEY1 (Fig. 3B), showing that MLL supports NKX2-1 expression. To analyze the corresponding histone modifications at the promoter regions of NKX2-1 and HEY1, we performed ChIP using antibodies for activatory H3K4me3 (mediated by MLL) and inhibitory H3K27me3 (mediated by EZH2). Analysis of the NKX2-1 promoter showed presence of H3K27me3 in both SU-DHL-4 and SU-DHL-5, while H3K4me3 was restricted to SU-DHL-5 (Fig. 3B). The HEY1 promoter exclusively bore H3K27me3 in both cell lines (Fig. 3B). The presence of both types of analyzed H3 trimethylations at the NKX2-1 locus in SU-DHL-5 indicates an aberrant bivalent chromatin configuration, known to prime developmental genes for activation, and thus likely to favor NKX2-1 expression in this cell line [29]. Moreover, profiling data indicated overexpression of the MLL complex component AF9 and reduced expression of ASB2, which mediates degradation of MLL (Table 1) [17,30]. In leukemia, MLL is frequently activated by chromosomal aberrations at 11q23, resulting in amplifications or diverse fusion genes [31]. We looked for chromosomal rearrangements in SU-DHL-5 by genomic profiling, finding duplication of MLL accompanied by deletion of its immediately telomeric region (Fig. 3C). FISH analysis confirmed that the MLL gain was coupled with the downstream deletion (Fig. 3D). SKY results excluded chromosomal translocations at 11q23 (Fig. 1F), and RT-PCR analysis of the most prolific MLL-fusion transcripts excluded cryptic fusions with AF4, AF6, AF9, AFX and ENL (Fig. 3E). Collectively, our results show that genomic copy number gain of wild type MLL underlies overexpression of MLL in SU-DHL-5 cells. Furthermore, overexpression of histone H3E (Table 1) correlated with rearrangements of chromosome 6 targeting histone gene cluster 1 (HIST1) at 6p22, as indicated by genomic array data (Fig. 3F). FISH analysis using probes covering HIST1 combined with a painting probe for chromosome 6 confirmed the breakpoint nearby, lying nevertheless outwith the gene cluster (Fig. 3G). The FISH results are consistent with the SKY data showing two derivative chromosomes 6 harboring deletions at 6p22 and 6q13, respectively (Fig.
1F, 3F). Quantification of several histone RNA species demonstrated abundant expression of H1C, H2BB, H3E and H3H in SU-DHL-5 as compared to SU-DHL-4 (Fig. 3H), consistent with coordinate gene activation at the histone locus at 6p22 by the nearby chromosomal rearrangement. Accordingly, SU-DHL-5 contained higher levels of core-histone proteins, as shown by SDS-PAGE (Fig. 3H). Moreover, histone H2B, when analyzed by Western blot, showed raised protein expression and enhanced modification with a single ubiquitin at lysine 120, subsequently named H2Bub1 (Fig. 3H). The latter is of special interest because H2Bub1 reportedly promotes trimethylation of H3K4 by MLL [32], suggesting collaborative activities of the chromosomal aberrations targeting MLL and HIST1. H2B ubiquitination levels are regulated by counteracting ubiquitin-transferases (RNF20, RNF40) and ubiquitin-specific proteases (USPs) [33]. Profiling data indicated repression of USP46 in SU-DHL-5, as confirmed by RQ-PCR analysis in comparison to control cell lines (Fig. 4A). However, USP46 expression was not regulated by the transcriptional repressor HEY1, as analyzed by knockdown and overexpression experiments (Fig. 4A). RQ-PCR analysis of RNF20 and RNF40 showed upregulation in SU-DHL-5 relative to control cell lines (Fig. 4B). Knockdown of RNF20 and RNF40 by siRNA treatment inhibited expression of NKX2-1 (Fig. 4C), demonstrating that these H2B ubiquitin-transferases support expression of that homeobox gene. Moreover, siRNA-mediated knockdown of NKX2-1 inhibited expression of RNF40 (Fig. 4C), showing that expression of RNF40 is supported by NKX2-1 and thus the presence of reciprocal regulation. These results prompted us to look for additional deregulated histone modifiers which may contribute to the aberrant chromatin structure at NKX2-1. Inhibitory trimethylation of H3K27 is conducted by polycomb repressor complex (PRC) 2 (containing EZH2, JARID2, HOPX, E2F6) and is removed by the histone demethylase JMJD3 [34,35]. The expression level of JMJD3 was elevated in SU-DHL-5, as shown by RQ-PCR analysis (Fig. 4D). However, JMJD3 was not regulated by NKX2-1 (Fig. 4D). The expression of EZH2 was not significantly altered in SU-DHL-5 as compared to control cell lines (Fig. 4E). Nevertheless, treatment of SU-DHL-5 with the EZH2/PRC2 inhibitor DZNep resulted in enhanced transcription of both NKX2-1 and HEY1 (Fig. 4E). These results are consistent with our ChIP data showing presence of EZH2-mediated H3K27me3 at the promoter regions of both genes (Fig. 3B). Although microarray expression of the gene encoding PRC2 component JARID2 was normal, genomic array data showed a monoallelic deletion at 6p22 (Fig. 3F). Sequence data of SU-DHL-5 cells (provided by the BROAD Institute, Table 1) showed mutation of JARID2. Thus SU-DHL-5 is hemizygous for mutated JARID2. Homeobox only protein (HOPX) is associated with PRC2, regulating its capacity for repression [36,19]. SU-DHL-5 showed remarkably high levels of HOPX expression, as indicated by profiling data (Table 1) and confirmed by RQ-PCR results of lymphoma cell lines (Fig. 4F). SiRNA-mediated knockdown of HOPX resulted in reduced expression of NKX2-1 (Fig. 4G), showing that HOPX activates NKX2-1 transcription. But this activation was not reciprocal, since reduction of NKX2-1 was unaccompanied by altered HOPX levels (Fig. 4G). However, genomic copy number data and SKY results excluded genomic aberrations at the HOPX locus (Fig. S2).
Moreover, examination of DNA methylation of a conspicuous CpG island at the HOPX locus (CpG 109) excluded an abnormal configuration (Fig. S3), leaving the mechanism of this striking overexpression elusive. HOPX is associated with E2F6, which showed reduced expression levels in SU-DHL-5 (Fig. 4H). Interestingly, this reduction was mediated by HEY1, as shown by knockdown experiments which led to overexpression of E2F6 (Fig. 4H). Together, our results demonstrate a significant role for several chromatin modifiers underlying NKX2-1 activation in SU-DHL-5. These regulatory interactions are partly reciprocal and constitute feedback loops, probably resulting in enhanced and stabilized gene activities. Treatment of SU-DHL-5 cells with TNFα for 1 h or 4 h resulted in activated transcription of both NKX2-1 and HEY1 (Fig. 5A). In accordance with this result, treatment with an NF-κB inhibitor reduced expression of both genes (Fig. 5B), demonstrating a positive role for TNFα/NF-κB-signaling in their transcriptional regulation. Furthermore, expression of protein kinase C epsilon (PRKCE) was enhanced in SU-DHL-5 when compared to control cell lines (Table 1, Fig. 5C). This may be of interest because the activity of NF-κB is regulated by PRKC. SiRNA-mediated reduction of NKX2-1 was accompanied by downregulation of PRKCE (Fig. 5C), indicating a positive regulatory role for this NKL homeobox gene. Sequence analysis of the upstream region of PRKCE (UCSC genome browser, release GRCh37/hg19) revealed a binding site for NKX2-1 at −25,832 bp. ChIP analysis of this site confirmed binding of the NKX2-1 antibody, demonstrating direct activation of PRKCE by NKX2-1 (Fig. 5C). This kind of gene regulation represents positive feedback, mutually reinforcing the activities of the genes involved and demonstrating a complex gene regulatory network contributing to ectopic NKX2-1 expression. Treatment of SU-DHL-5 cells with TGFβ resulted in enhanced transcription of both NKX2-1 and HEY1 after 16 h (Fig. 5D). Interestingly, after 1 h of treatment the expression of HEY1 rose in a concentration-dependent manner, while NKX2-1 showed no change in transcript levels at this time point (Fig. 5D). These results may indicate that HEY1 is a direct target of TGFβ-signaling, contrasting with the delayed and thus indirect mechanism of NKX2-1 regulation. However, treatment of SU-DHL-5 with inhibitory anti-TGFβ showed no effect, excluding autocrine activation (Fig. 5E). SiRNA-mediated knockdown of SMAD3 or SMAD9 reduced transcription of HEY1, while that of NKX2-1 remained unperturbed (Fig. 5F), supporting the direct regulation of HEY1 by TGFβ/SMAD-signaling. Of note, according to the UCSC genome browser (release GRCh37/hg19), the promoter region of HEY1 contains binding sites for SMAD proteins which are colocated with those of NKX2-1 (Fig. 2C): a significant observation, since SMAD3 protein has been shown to interact with NKX2-1 [37]. Immunostaining of NKX2-1 and SMAD3 in SU-DHL-5 cells consistently revealed colocalization of both TFs (Fig. 5F). Therefore, our results indicate that in SU-DHL-5 cells NKX2-1 and SMAD3 coactivate HEY1 by protein-protein interaction and direct binding to the promoter region. The impact of additional pathways on the expression of NKX2-1 and HEY1 was analyzed by treatment of SU-DHL-5 cells with IL4, BMP4, IL10 and WNT5B for 16 h (Fig. 5G). The most significant effects on NKX2-1 were observed after stimulation with IL4, and for HEY1 with BMP4. BMP4-signaling is mediated by SMAD proteins just like TGFβ-signaling, consistent with the results for HEY1 regulation.
IL4-signaling is mediated by STAT3, which is upregulated in SU-DHL-5 (Table 1). Accordingly, siRNA-mediated knockdown of STAT3 reduced expression of NKX2-1, while HEY1 transcription was not significantly affected (Fig. 5G). These results support an activating role for IL4/STAT3-signaling on NKX2-1 and for BMP4/SMAD-signaling on HEY1 in SU-DHL-5. Finally, we analyzed the impact of cAMP/cGMP-signaling on the expression of NKX2-1 and HEY1. This pathway stood out due to overexpression of NOS1 (which synthesizes nitric oxide, an activator of guanylate cyclase) and reduced expression levels of several PDEs, as indicated by comparative profiling data (Table 1) and illustrated by a heatmap of PDE expression (Fig. 5H). Furthermore, overexpressed DDAH1 (Table 1) encodes an inhibitor of ADMA, a negative regulator of NOS1. Therefore, SU-DHL-5 cells were treated with cAMP, cGMP and the cGMP-specific PDE inhibitor sildenafil, and NKX2-1 and HEY1 transcription was subsequently quantified (Fig. 5I). After 4 h, expression of NKX2-1 rose significantly after treatment with sildenafil, while HEY1 rose with cAMP. After 16 h, expression of NKX2-1 peaked in response to treatment with sildenafil and cGMP, while HEY1 responded maximally to treatment with cAMP and, to a lesser extent, cGMP and sildenafil (Fig. 5I). These data suggest that expression of NKX2-1 and HEY1 is primarily regulated via cGMP and cAMP, respectively. Accordingly, siRNA-mediated knockdown of the cGMP-specific PDE6D resulted in enhanced expression of NKX2-1, while HEY1 remained unchanged (Fig. 5K). The enhanced expression of NOS1 in SU-DHL-5 as compared to control cell lines was confirmed by RQ-PCR analysis (Fig. 5K). SiRNA-mediated knockdown of NKX2-1 resulted in strong reduction of NOS1, indicating an activatory role for NKX2-1 (Fig. 5K). However, ChIP analysis excluded direct binding of NKX2-1 to the promoter region of NOS1 (data not shown), suggesting an indirect activation mechanism. Finally, the transcriptional repressor HEY1 was found to underlie PDE4A repression, while playing no part in PDE6D regulation, as shown by overexpression experiments (Fig. 5L). Thus, deregulated expression of PDEs (PDE6D, PDE4A) and NOS1 via HEY1 and NKX2-1 indicates feedback regulation of both TFs. Discussion In the DLBCL cell line SU-DHL-5 we have identified ectopic expression of NKX2-1, which is activated by the bHLH TF HEY1, aberrant modifications of the chromatin structure, and particular signaling pathways. NKX2-1 belongs to the NKL family of homeobox genes, which is implicated in the tumorigenesis of T-ALL [11,38]. In silico expression analysis of patient samples indicated aberrant activity of NKX2-1 in 5% of DLBCL, representing a hitherto unrecognized subgroup of this disease. Therefore, our results expand the oncogenic role of this gene family within the entity of lymphoid malignancies.
NKX2-1 is physiologically expressed in the developing lung and thyroid but not, as shown here, in hematopoietic cells. In a physiological context, NKX2-1 regulates differentiation processes both during embryogenesis and in the adult [7,8]. In lung cancer, NKX2-1 performs the role of a lineage-specific oncogene, enhancing proliferation and survival [39,40]. Overexpression of NKX2-1 mediated by genomic amplification enhances the tumorigenicity of lung cancer cells, as evidenced by colony formation of lung epithelial cells and advanced malignancy in affected patients [41]. Furthermore, NKX2-1, together with FOXA1, enhances survival in lung adenocarcinoma by transcriptional activation of LMO3 [42]. However, we experimentally assayed neither the tumorigenicity nor the survival of SU-DHL-5 cells, but our comparative expression data gave no hint of deregulated proliferation or apoptosis. Accordingly, SU-DHL-5 showed no increased expression of LMO3, suggesting absence of this particular survival pathway. Rather, the profiling data of SU-DHL-5 identified TFs and signaling pathways supporting the view that NKX2-1 mediates deregulated cell differentiation. Our data ruled out aberrant activation of NKX2-1 via chromosomal rearrangements, contrasting with the picture of NKL homeo-oncogenes in T-ALL. We identified several (deregulated) genes involved in NKX2-1 expression by comparative profiling and subsequent knockdown and overexpression studies. Among the TFs we identified activating HEY1, which underlies NKX2-1 transcription. HEY1 belongs to the inhibitory subgroup of bHLH proteins, deregulation of which promotes the development of leukemia/lymphoma by affecting the function of E2A in driving lymphoid development [26,36,43-45]. Our data show direct activation of HEY1 by NKX2-1 and indirect activation of NKX2-1 by the repressive HEY1. HEY1 is physiologically expressed in developing lung tissue, like NKX2-1 [7,46]. Therefore, this regulatory role may also figure in the physiological context of the lung. However, forced expression of HEY1 in DLBCL cell lines did not induce NKX2-1 transcription, indicating that additional factors or chromatin modifications are necessary for this gene activity, as described below. MLL contributes to enhanced expression of NKX2-1 in SU-DHL-5 cells. It is overexpressed in this cell line via chromosomal rearrangements resulting in duplication of the wild type gene. Tandem triplication of MLL has been described in intravascular large B-cell lymphoma, suggesting a more widespread oncogenic role in B-cell lymphomas than hitherto supposed [47]. The MLL gene encodes a methyltransferase which modifies histone H3 (H3K4me3). This modification marks active chromatin and gene transcription [17]. The presence of both activatory H3K4me3 and inhibitory H3K27me3, as detected here in SU-DHL-5 at NKX2-1, has been termed "bivalent chromatin modification", a structure which primes developmental genes for activation in embryonal stem cells [29]. Therefore, this histone mark may represent one of the basic factors predisposing to ectopic NKX2-1 activation. Additionally, overexpression of core histones in SU-DHL-5 coincided with a chromosomal aberration at 6p22, housing histone gene cluster 1. H2B in particular was shown to be overexpressed and strongly ubiquitinated at position K120. This modification guides and enhances the process of MLL-mediated H3K4 methylation, indicating cooperation of both types of chromosomal rearrangements in NKX2-1 expression in SU-DHL-5 [32].
Several enzymes performing histone modifications were deregulated in SU-DHL-5, contributing to a permissive chromatin structure at the NKX2-1 gene. RNF and USP genes encode ubiquitin-specific transferases and proteases, respectively, regulating ubiquitination of histone H2B [33]. Our results demonstrate, in addition to their aberrant expression, their impact on deregulating NKX2-1 transcription. Accordingly, NKX2-1 has been described as a target gene of RNF20 in HeLa cells [48]. PRC2 contains the H3-methyltransferase EZH2 and the modulating components HOPX and E2F6 [49,50]. Expression levels of HOPX and E2F6 were altered in SU-DHL-5, and functional analyses demonstrated their impact on NKX2-1 regulation. However, while HOPX expression is activated by NKX2-1 in lung cancer cells, it was not regulated by NKX2-1 in SU-DHL-5 cells [51]. Of note, both HOPX and E2F6 are overexpressed in HL, indicating the presence therein of deregulated chromatin structures, albeit distinct from those in SU-DHL-5 [23,52]. Interestingly, in SU-DHL-5 many genes encoding deregulated histone modifiers are influenced by NKX2-1 or HEY1 in their expression levels, revealing a reciprocal network which mutually reinforces aberrant oncogene activities. As well as TFs and chromatin modifiers, we identified signaling pathways regulating NKX2-1 expression: first, TNFa, NFkB and the NFkB-activating kinase PRKCE were involved in the activation of both NKX2-1 and HEY1; second, IL4/STAT3-signaling was primarily engaged in the activation of NKX2-1; and third, TGFb/BMP4 and SMAD3 activated the transcription of HEY1 alone. This last-named activity may explain both the presence of neighboring binding sites for NKX2-1 and SMAD seen at the HEY1 promoter, and the nuclear colocalization of NKX2-1 and SMAD3 in SU-DHL-5, indicating cooperative activation of HEY1. This coactivation may represent a switch-like regulation which stabilizes gene activities [53]. Furthermore, enzymes regulating levels of cGMP and cAMP were identified as respective mediators of NKX2-1 and HEY1 expression, including NOS1 and specific PDEs. Mutated genes of this pathway in SU-DHL-5 include PDE4DIP and AKAP12. Of note, while reduced levels of PDEs were identified here in a DLBCL cell line, enhanced levels of PDE5A have been reported in HL cells [54], suggesting that PDE activity may be critical for lymphomagenesis. Furthermore, sequence data revealed several mutated MAP kinases, e.g. MAP3K14, MAP2K1, and MAPK4. However, their impact on NKX2-1 expression was not considered in this study. Our data map the emergence of an aberrant gene regulatory network with the NKL homeobox gene NKX2-1 occupying a central role (Fig. 6). It comprises several network modules with feedback motifs. The functional data indicate that these modules contribute to an enhancement and stabilization of NKX2-1 expression. In SU-DHL-5, deregulated chromatin may represent the initial step in NKX2-1 activation and subsequent cell transformation. According to such a model, chromosomal aberrations enhancing MLL and histone expression poised the chromatin at the NKX2-1 locus for activation, and activated NKX2-1 subsequently regulates HEY1. This combination of modified chromatin and ectopic expression of transcriptional regulators represents an alternative mechanism of aberrant NKL homeobox gene activation in lymphoid malignancy. In T-ALL, deregulation of select NKL family genes is typically effected by chromosomal alterations which juxtapose enhancer elements cognate to T-cell receptor genes or BCL11B [12].
In the case of NKX3-1, however, activation in T-ALL is controlled by particular deregulated hematopoietic TFs (TAL1, LYL1, MSX2), where aberrant chromatin structures may also participate [13,14]. Transdifferentiation or reprogramming of cells is practicable in several cell types, including hematopoietic cells, and may be effected by forced expression of particular TFs [55]. For example, CEBPA and GATA1 drive differentiation into macrophages and megakaryocytes, respectively [56,57], and NKX2-1 together with PAX8 mediates differentiation of embryonic stem cells into thyroid cells [58]. However, aberrant or ectopic expression of oncogenic (cell-type specific) TFs does not result in transdifferentiation of the tumor cells. These TFs rather disturb the physiological process of terminal differentiation, resulting in developmental arrest at immature stages. Explanations for the preference for differentiation arrest over reprogramming may lie in the cellular context, as described for the TF TAL1 in T-ALL, which occupies different binding sites in normal and leukemic cells, or in the need for stage-specific coregulators in addition to master factors [59,60]. It is also likely that transdifferentiation requires permissive chromatin states normally present in embryonic cells, which may be partially recapitulated in adult cells by treatment with histone methyltransferase inhibitors [61]. Noteworthy in this context is that the expression level of NKX2-1 was about 8-fold higher in primary physiological tissues as compared to SU-DHL-5. This scale of difference has been recognized for deregulated NKX2-5 and NKX3-1 in T-ALL as well [14], suggesting oncogenic actions of ectopic NKL homeobox genes at low expression levels instead of driving differentiation at higher levels. This interrelation suggests that enhancement of ectopic oncogene expression (e.g. NKX2-1 in DLBCL) may result in transdifferentiation of the lymphoma cells into benign non-hematopoietic cells, representing a novel concept for cancer therapy. Taken together, we have identified aberrant expression of the NKL homeobox gene NKX2-1 in subsets of DLBCL, which is mediated by particular factors including TFs, chromatin mediators, and signaling components. This result expands the oncogenic role of this homeobox gene family within the group of lymphoid malignancies. However, its diagnostic and/or therapeutic potential requires further investigation. Nevertheless, our data may also be of interest for analyses and assessment of NKX2-1 in lung and thyroid cancer.
7,516.6
2013-04-29T00:00:00.000
[ "Biology" ]
On the Normalized Laplacian Spectrum of the Linear Pentagonal Derivation Chain and Its Application: A novel distance function named resistance distance was introduced on the basis of electrical network theory. The resistance distance between any two vertices u and v in a graph G is defined to be the effective resistance between them when unit resistors are placed on every edge of G. The degree-Kirchhoff index of G is the sum, over all pairs of vertices of G, of the product of their degrees and their resistance distance. In this article, according to the decomposition theorem for the normalized Laplacian polynomial of the linear pentagonal derivation chain QP_n, the normalized Laplacian spectrum of QP_n is determined. Combining this with the relationship between the roots and the coefficients of the characteristic polynomials, explicit closed-form formulas for the degree-Kirchhoff index and the number of spanning trees of QP_n are obtained, respectively. Moreover, we also obtain the Gutman index of QP_n, and we discover that the degree-Kirchhoff index of QP_n is almost half of its Gutman index.

Introduction

Throughout this paper, we consider simple, finite, and undirected graphs. Let G = (V(G), E(G)) be a graph with vertex set V(G) = {v_1, v_2, ..., v_n} and edge set E(G). For v_i ∈ V(G), let N_G(v_i) be the set of neighbors of v_i in G. In particular, d_i = |N_G(v_i)| is the degree of v_i in G. The adjacency matrix of G, written as A(G), is the n × n matrix whose (i, j)-entry is 1 if v_i v_j ∈ E(G) and 0 otherwise. The Laplacian matrix is L(G) = D(G) − A(G), where D(G) = diag(d_1, d_2, ..., d_n) is the diagonal degree matrix of G. The normalized Laplacian matrix [1] of a graph G, 𝓛(G), is defined to be

$$\mathcal{L}(G) = D(G)^{-1/2}\, L(G)\, D(G)^{-1/2},$$

with the convention that D(G)^{−1}(i, i) = 0 if d_i = 0. Since the normalized Laplacian matrix is consistent with the eigenvalues in spectral geometry and random walks [1], it has attracted more and more researchers' attention. From the definition of 𝓛(G), it is easy to obtain that

$$(\mathcal{L}(G))_{ij} = \begin{cases} 1, & i = j,\ d_i \neq 0,\\[2pt] -\dfrac{1}{\sqrt{d_i d_j}}, & i \neq j,\ v_i v_j \in E(G),\\[2pt] 0, & \text{otherwise.} \end{cases}$$

For an n × n matrix M, we denote the characteristic polynomial det(xI_n − M) of M by Φ_M(x), where I_n is the identity matrix of order n. In particular, for a graph G, Φ_{L(G)}(x) (respectively, Φ_{𝓛(G)}(x)) is the Laplacian (respectively, normalized Laplacian) characteristic polynomial of G, and its roots are the Laplacian (respectively, normalized Laplacian) eigenvalues of G. The collection of eigenvalues of L(G) (respectively, 𝓛(G)), together with their multiplicities, is called the L-spectrum (respectively, 𝓛-spectrum) of G. For a graph G, the distance between vertices v_i and v_j in G is defined as the length of a shortest path between the two vertices, denoted d_{ij}. One famous distance-based parameter, the Wiener index [2], is defined as the sum of the distances between all pairs of vertices of the graph: W(G) = ∑_{i<j} d_{ij}. For more studies on the Wiener index, one may be referred to [3-8]. In 1994, Gutman presented an index based on vertex degrees and distances, the Gutman index [9]: Gut(G) = ∑_{i<j} d_i d_j d_{ij}. He also showed that, when G is an n-vertex tree, the Wiener index and the Gutman index are closely related via Gut(G) = 4W(G) − (2n − 1)(n − 1).
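As a quick numerical companion to the definitions above, the following sketch (Python with numpy; the path graph P_4 and all variable names are illustrative choices, not objects from the paper) builds the normalized Laplacian, checks the entrywise formula, and computes the Wiener and Gutman indices, including the tree identity just stated.

```python
import numpy as np

# Path graph P4: v1 - v2 - v3 - v4 (degrees 1, 2, 2, 1)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)

# Normalized Laplacian: L_norm = D^{-1/2} (D - A) D^{-1/2}
L_norm = np.diag(d**-0.5) @ (np.diag(d) - A) @ np.diag(d**-0.5)

# Entrywise formula: 1 on the diagonal, -1/sqrt(d_i d_j) on edges, 0 elsewhere
expected = np.eye(4) - A / np.sqrt(np.outer(d, d))
print(np.allclose(L_norm, expected))          # True

# Shortest-path distances via Floyd-Warshall
dist = np.where(A > 0, 1.0, np.inf)
np.fill_diagonal(dist, 0.0)
for k in range(4):
    dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])

i, j = np.triu_indices(4, k=1)
W = dist[i, j].sum()                          # Wiener index: 10 for P4
Gut = (d[i] * d[j] * dist[i, j]).sum()        # Gutman index: 19 for P4
print(W, Gut, 4 * W - (2 * 4 - 1) * (4 - 1))  # tree identity: Gut = 4W - (2n-1)(n-1)
```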
Based on electrical network theory, Klein and Randić [10] proposed a novel distance function named resistance distance. Let G be a connected graph; the resistance distance between vertices v_i and v_j, denoted by r_{ij}, is defined as the effective resistance between v_i and v_j in the electrical network obtained by replacing each edge of G with a unit resistor. The resistance distance is a better indicator of the connection between two vertices than the ordinary distance. In fact, the resistance distance reflects intrinsic properties of the graph and has many applications in chemistry [11,12]. One famous parameter, the Kirchhoff index [10], is defined as the sum of the resistance distances in a simple connected graph: Kf(G) = ∑_{i<j} r_{ij}. In 1993, Klein and Randić [10] proved that r_{ij} ≤ d_{ij} and Kf(G) ≤ W(G), with equality if and only if G is a tree. The intrinsic correlation between the Kirchhoff index and the Laplacian eigenvalues of a graph G was shown, independently, by Gutman and Mohar [13] and Zhu et al. [14] as

$$Kf(G) = n \sum_{i=1}^{n-1} \frac{1}{\mu_i},$$

where n is the number of vertices of the graph G and 0 = μ_n < μ_{n−1} ≤ ⋯ ≤ μ_1 are the Laplacian eigenvalues of G. As an analogue of the Gutman index, Chen and Zhang [15] presented another graph parameter, the degree-Kirchhoff index Kf*(G) = ∑_{i<j} d_i d_j r_{ij}. Meanwhile, the authors of [15] proved that the degree-Kirchhoff index is closely related to the corresponding normalized Laplacian spectrum. Many researchers have devoted themselves to the study of the normalized Laplacian spectrum and the degree-Kirchhoff index of certain classes of graphs; one may be referred to [16-22]. As a structural descriptor of chemical molecular graphs, a topological index can reflect structural characteristics of compounds. Like the Kirchhoff index, the degree-Kirchhoff index is a topological index. Unfortunately, the resistance distance and the degree-Kirchhoff index of a general graph are difficult to compute owing to their computational complexity. Therefore, it is necessary to find explicit closed-form formulas for the degree-Kirchhoff index. In fact, the degree-Kirchhoff index is difficult to calculate for general graphs, but it is computable for some graphs with good periodicity and symmetry. Huang et al. studied the degree-Kirchhoff index of some graphs with a good structure, such as the linear polyomino chain [23] and the linear hexagonal chain [24]. In addition, there are also studies on the normalized Laplacian spectrum and the degree-Kirchhoff index of phenylene chains [25,26]. The number of spanning trees of a graph (network) is an important quantity for evaluating the reliability of the graph [27]. Therefore, studying the number of spanning trees of graphs has important theoretical and practical significance.
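The resistance-based quantities introduced above are easy to probe numerically. A minimal sketch (Python with numpy; the star K_{1,3} is an arbitrary test graph, chosen because it is a tree) computes r_{ij} from the Moore-Penrose pseudoinverse of the Laplacian and checks both Kf(G) = n ∑ 1/μ_i and the tree case Kf(G) = W(G) = 9, alongside the degree-Kirchhoff index.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def resistance_matrix(A):
    """r_ij via the pseudoinverse G+ of the Laplacian:
    r_ij = G+_ii + G+_jj - 2 G+_ij."""
    Gp = np.linalg.pinv(laplacian(A))
    g = np.diag(Gp)
    return g[:, None] + g[None, :] - 2 * Gp

# Star K_{1,3}: a tree, so Kf(G) equals W(G)
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], float)
n = len(A)
R = resistance_matrix(A)
d = A.sum(axis=1)
i, j = np.triu_indices(n, k=1)

Kf = R[i, j].sum()                        # Kirchhoff index: 9 for K_{1,3}
Kf_star = (d[i] * d[j] * R[i, j]).sum()   # degree-Kirchhoff index: 15 for K_{1,3}

mu = np.sort(np.linalg.eigvalsh(laplacian(A)))[1:]  # nonzero Laplacian eigenvalues
print(Kf, n * (1 / mu).sum())             # both 9.0: Kf(G) = n * sum(1/mu_i)
print(Kf_star)
```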
Hexagonal systems are very important in theoretical chemistry because they are natural graphical representations of benzene molecular structures. In recent years, researchers have worked to study topological indices of hexagonal systems [4,28]. The linear pentagonal derivation chain studied in this paper is related to hexagonal systems. A linear pentagonal chain of length n, denoted by P_n, is made up of 2n pentagons, where every two pentagons sharing two sides can be seen as a hexagon with one additional vertex and two additional sides. The linear pentagonal derivation chain, denoted by QP_n, is then the graph obtained by attaching four-membered rings to each hexagon composed of two pentagons of P_n, as shown in Figure 1. Explicit closed-form formulas for the Kirchhoff index and the number of spanning trees of the linear pentagonal derivation chain QP_n have been derived from the Laplacian spectrum [29]. Motivated by the above works, we consider the degree-Kirchhoff index and the number of spanning trees of the linear pentagonal derivation chain in terms of the normalized Laplacian spectrum. Different from the method in [29], in this paper we obtain the number of spanning trees from the normalized Laplacian spectrum, which gives a new way of calculating the number of spanning trees of QP_n. In this article, according to the decomposition theorem for the normalized Laplacian polynomial of the linear pentagonal derivation chain QP_n, the normalized Laplacian spectrum of QP_n is determined. Combining this with the relationship between the roots and the coefficients of the characteristic polynomials, explicit closed-form formulas for the degree-Kirchhoff index and the number of spanning trees of QP_n are obtained, respectively. Meanwhile, we also obtain the Gutman index of QP_n. For a general graph G, the ratio Kf*(G)/Gut(G) need not tend to any particular constant. However, based on our obtained results, we are surprised to discover that for QP_n this ratio tends to 1/2 as n → ∞.

Preliminaries

In this section, we give some notations and terminology, as well as some known results that will be used in the following sections. An automorphism of G is a permutation π of V(G) with the property that v_i v_j is an edge of G if and only if π(v_i)π(v_j) is an edge of G. Suppose we mark the vertices of QP_n as shown in Figure 1; then the mapping π that exchanges each labeled vertex with its mirror image, and fixes the remaining vertices, is an automorphism of QP_n. For convenience, we abbreviate 𝓛(QP_n) to 𝓛. Let V_0 denote the set of vertices fixed by π, and let V_1 and V_2 denote the two vertex sets exchanged by π. By a suitable arrangement of the vertices of QP_n, the normalized Laplacian matrix 𝓛 can be written as the following block matrix,

$$\mathcal{L} = \begin{pmatrix} \mathcal{L}_{V_{00}} & \mathcal{L}_{V_{01}} & \mathcal{L}_{V_{02}} \\ \mathcal{L}_{V_{10}} & \mathcal{L}_{V_{11}} & \mathcal{L}_{V_{12}} \\ \mathcal{L}_{V_{20}} & \mathcal{L}_{V_{21}} & \mathcal{L}_{V_{22}} \end{pmatrix}, \qquad (1)$$

where 𝓛_{V_{ij}} is the submatrix formed by the rows corresponding to the vertices in V_i and the columns corresponding to the vertices in V_j, for i, j = 0, 1, 2. Let

$$T = \begin{pmatrix} I & 0 & 0 \\ 0 & \tfrac{1}{\sqrt{2}} I & \tfrac{1}{\sqrt{2}} I \\ 0 & \tfrac{1}{\sqrt{2}} I & -\tfrac{1}{\sqrt{2}} I \end{pmatrix}$$

be the block matrix whose blocks have the same dimensions as the corresponding blocks of 𝓛. Note that, by the automorphism π, 𝓛_{V_{01}} = 𝓛_{V_{02}}, 𝓛_{V_{11}} = 𝓛_{V_{22}} and 𝓛_{V_{12}} = 𝓛_{V_{21}}. From the unitary transformation T𝓛Tᵀ, we obtain

$$T \mathcal{L} T^{\top} = \begin{pmatrix} \mathcal{L}_A & 0 \\ 0 & \mathcal{L}_S \end{pmatrix},$$

where

$$\mathcal{L}_A = \begin{pmatrix} \mathcal{L}_{V_{00}} & \sqrt{2}\, \mathcal{L}_{V_{01}} \\ \sqrt{2}\, \mathcal{L}_{V_{10}} & \mathcal{L}_{V_{11}} + \mathcal{L}_{V_{12}} \end{pmatrix}, \qquad \mathcal{L}_S = \mathcal{L}_{V_{11}} - \mathcal{L}_{V_{12}}. \qquad (2)$$

According to the above analysis, Huang et al. [24] derived the following decomposition theorem for the normalized Laplacian characteristic polynomial.

Lemma 1 ([24]). Suppose 𝓛, 𝓛_A and 𝓛_S are defined as above. Then the normalized Laplacian characteristic polynomial of QP_n satisfies

$$\Phi_{\mathcal{L}}(x) = \Phi_{\mathcal{L}_A}(x)\, \Phi_{\mathcal{L}_S}(x),$$

where Φ_𝓛(x), Φ_{𝓛_A}(x) and Φ_{𝓛_S}(x) are the characteristic polynomials of 𝓛, 𝓛_A and 𝓛_S, respectively.

Lemma 2 ([30]). Let M_1, M_2, M_3, M_4 be, respectively, p × p, p × q, q × p and q × q matrices, with M_1 and M_4 invertible. Then

$$\det \begin{pmatrix} M_1 & M_2 \\ M_3 & M_4 \end{pmatrix} = \det(M_4)\,\det\!\left(M_1 - M_2 M_4^{-1} M_3\right) = \det(M_1)\,\det\!\left(M_4 - M_3 M_1^{-1} M_2\right).$$

Lemma 3. Suppose G is a connected graph of order n with m edges, and let λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_{n−1} > λ_n = 0 be the normalized Laplacian eigenvalues of G. Then (i) Kf*(G) = 2m ∑_{i=1}^{n−1} 1/λ_i, and (ii) the number of spanning trees of G is τ(G) = (1/(2m)) ∏_{i=1}^{n} d_i · ∏_{i=1}^{n−1} λ_i.
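Lemma 1 and Lemma 3(ii) can both be sanity-checked numerically on any graph admitting a mirror-type involutive automorphism. The sketch below (Python with numpy) uses a 6-cycle whose reflection fixes vertices 0 and 3; this is an illustrative stand-in, not the QP_n of the paper. It builds 𝓛_A and 𝓛_S exactly as in (2), confirms that the spectrum of 𝓛 splits accordingly, and verifies the spanning-tree formula (a cycle C_n has n spanning trees).

```python
import numpy as np

# 6-cycle C6, labeled so that the reflection through vertices 0 and 3
# is an automorphism: it fixes V0 = [0, 3] and maps V1 = [1, 2] onto
# V2 = [5, 4] entrywise.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
d = A.sum(axis=1)
L = np.eye(n) - np.diag(d**-0.5) @ A @ np.diag(d**-0.5)   # normalized Laplacian

V0, V1, V2 = [0, 3], [1, 2], [5, 4]
B = lambda R, C: L[np.ix_(R, C)]                           # block extractor
LA = np.block([[B(V0, V0),              np.sqrt(2) * B(V0, V1)],
               [np.sqrt(2) * B(V1, V0), B(V1, V1) + B(V1, V2)]])
LS = B(V1, V1) - B(V1, V2)

# Lemma 1: spec(L) is the union of spec(L_A) and spec(L_S)
full  = np.sort(np.linalg.eigvalsh(L))
split = np.sort(np.concatenate([np.linalg.eigvalsh(LA), np.linalg.eigvalsh(LS)]))
print(np.allclose(full, split))        # True

# Lemma 3(ii): tau(G) = (prod d_i / 2m) * product of nonzero eigenvalues
m = A.sum() / 2
tau = np.prod(d) / (2 * m) * np.prod(full[1:])
print(round(tau))                      # 6 spanning trees, as expected for C6
```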
The Normalized Laplacian Spectrum of QP_n

In this part, using Lemma 1, we first derive the normalized Laplacian eigenvalues of the linear pentagonal derivation chain QP_n. We then give a complete description of the sum of the reciprocals of the normalized Laplacian eigenvalues and of the product of the normalized Laplacian eigenvalues, which will be used to obtain the degree-Kirchhoff index and the number of spanning trees of QP_n, respectively. Given an n × n square matrix M, we will use M[i, j, ..., k] to denote the submatrix obtained by deleting the i-th, j-th, ..., k-th rows and the corresponding columns of M. In view of (1), the blocks 𝓛_{V_{00}}, 𝓛_{V_{01}}, 𝓛_{V_{12}} and 𝓛_{V_{11}} can be written out explicitly, and substituting them into (2) yields 𝓛_A and 𝓛_S. By Lemma 1, it is easy to see that the normalized Laplacian spectrum of QP_n consists of the eigenvalues of 𝓛_A together with those of 𝓛_S. Now suppose that the eigenvalues of 𝓛_A and 𝓛_S are denoted by α_0 ≤ α_1 ≤ ⋯ ≤ α_{4n} and β_1 ≤ β_2 ≤ ⋯ ≤ β_{3n+1}, respectively. The eigenvalues of a normalized Laplacian matrix are nonnegative [1]; hence the eigenvalues of 𝓛(QP_n) are nonnegative, that is, 𝓛_A and 𝓛_S are positive semi-definite. It is then not difficult to verify that α_0 = 0, α_i > 0 (i = 1, 2, ..., 4n) and β_j > 0 (j = 1, 2, ..., 3n + 1).

Degree-Kirchhoff Index and the Number of Spanning Trees of QP_n

In this section, we first introduce the following lemma, which is a direct consequence of Lemma 3(i). Note that |E(QP_n)| = 10n + 1.

Lemma 4. Suppose QP_n is a linear pentagonal derivation chain of length n. Then we have

$$Kf^{*}(QP_n) = 2(10n+1)\left(\sum_{i=1}^{4n}\frac{1}{\alpha_i} + \sum_{j=1}^{3n+1}\frac{1}{\beta_j}\right).$$

Proof. According to the relationship between the roots and the coefficients of Φ_{𝓛_A}(x), we have

$$\sum_{i=1}^{4n}\frac{1}{\alpha_i} = \frac{-a_{4n-1}}{a_{4n}}. \qquad (3)$$

In the remainder of this part, it suffices to determine a_{4n} and −a_{4n−1} in Equation (3), respectively.

Proof. One can see that the number a_{4n} (= (−1)^{4n} a_{4n}) is the sum of the determinants obtained by deleting the i-th row and the corresponding column of 𝓛_A, for i = 1, 2, ..., 4n + 1 (see also [32]); that is,

$$a_{4n} = \sum_{i=1}^{4n+1} \det \mathcal{L}_A[i].$$

Case 1. 1 ≤ i ≤ n. According to the structure of 𝓛_A (see the details in (2)), deleting the i-th row and the corresponding column of 𝓛_A is equivalent to deleting the i-th row and the corresponding column of I_n, the i-th row of √2 𝓛_{V_{01}} and the i-th column of √2 𝓛_{V_{10}}. We mark the resulting blocks of 𝓛_A[i] by I_{n−1}, B_{(n−1)×(3n+1)}, Bᵀ_{(n−1)×(3n+1)} and C_{(3n+1)×(3n+1)}, respectively. Applying Lemma 2 to the resulting matrix, one has det 𝓛_A[i] = det(C − BᵀB), where there is only one 1 on the diagonal, in the (3i − 1)-th row, of C − BᵀB.

Case 2. n + 1 ≤ i ≤ 4n + 1. In this case, according to the structure of 𝓛_A, deleting the i-th row and the corresponding column of 𝓛_A is equal to deleting the (i − n)-th row and the corresponding column of 𝓛_{V_{11}} + 𝓛_{V_{12}}, the (i − n)-th column of √2 𝓛_{V_{01}} and the (i − n)-th row of √2 𝓛_{V_{10}}. Expressing the resulting blocks, respectively, as I_n, B_{n×3n}, Bᵀ_{n×3n} and C_{3n×3n}, Lemma 2 then gives det 𝓛_A[i] = det(C − BᵀB), where the auxiliary matrices E and F arise in the computation.

Proof. One can see that −a_{4n−1} (= (−1)^{4n−1} a_{4n−1}) is the sum of the determinants of the matrices obtained by deleting the i-th row and column and the j-th row and column of 𝓛_A, for some 1 ≤ i < j ≤ 4n + 1.

Case 1. 1 ≤ i < j ≤ n. In this case, deleting the i-th and j-th rows and the corresponding columns of 𝓛_A amounts to deleting the i-th and j-th rows and the corresponding columns of I_n, the i-th and j-th rows of √2 𝓛_{V_{01}} and the i-th and j-th columns of √2 𝓛_{V_{10}}. Denote the resulting blocks, respectively, by I_{n−2}, B_{(n−2)×(3n+1)}, Bᵀ_{(n−2)×(3n+1)} and C_{(3n+1)×(3n+1)}, and apply Lemma 2 to the resulting matrix. Then we have det 𝓛_A[i, j] = det(C − BᵀB), where there is exactly one 1 on the diagonal, in each of the (3i − 1)-th and (3j − 1)-th rows, of C − BᵀB for 1 ≤ i < j ≤ n. By direct computation we obtain the corresponding determinants. Case 2.
n + 1 ≤ i < j ≤ 4n + 1. In this case, deleting the i-th and j-th rows and the corresponding columns of 𝓛_A amounts to deleting the (i − n)-th and (j − n)-th rows and the corresponding columns of 𝓛_{V_{11}} + 𝓛_{V_{12}}, the (i − n)-th and (j − n)-th columns of √2 𝓛_{V_{01}} and the (i − n)-th and (j − n)-th rows of √2 𝓛_{V_{10}}. Similarly, denote the resulting blocks, respectively, by C_{(3n−1)×(3n−1)}, B_{n×(3n−1)}, Bᵀ_{n×(3n−1)} and I_n. Applying Lemma 2 to the resulting matrix, we have det 𝓛_A[i, j] = det(C − BᵀB), where the auxiliary matrices E, F and G arise in the computation. By a direct calculation, we obtain the corresponding determinants.

Case 3. 1 ≤ i ≤ n < j ≤ 4n + 1. Using a similar method, deleting the i-th and j-th rows and the corresponding columns of 𝓛_A amounts to deleting the i-th row and column of I_n, the (j − n)-th row and column of 𝓛_{V_{11}} + 𝓛_{V_{12}}, the i-th row and the (j − n)-th column of √2 𝓛_{V_{01}}, and the (j − n)-th row and the i-th column of √2 𝓛_{V_{10}}. We denote the resulting blocks, respectively, by I_{n−1}, C_{3n×3n}, B_{(n−1)×3n} and Bᵀ_{(n−1)×3n}, and apply Lemma 2 to the resulting matrix. Then we get det 𝓛_A[i, j] = det(C − BᵀB), where there is only one 1 on the diagonal, in the (3i − 1)-th row, of the auxiliary matrix E arising in the computation. Combining (7)-(10), we obtain −a_{4n−1}. Finally, substituting Claims 1 and 2 into (3), Lemma 5 holds directly.

Similarly, β_1, β_2, ..., β_{3n+1} are the roots of Φ_{𝓛_S}(x) = 0. Applying Vieta's formulas [31], we get

$$\sum_{j=1}^{3n+1}\frac{1}{\beta_j} = \frac{(-1)^{3n}\, b_{3n}}{\det \mathcal{L}_S}. \qquad (11)$$

In order to determine (−1)^{3n} b_{3n} and det 𝓛_S in (11), we consider the k-th order principal submatrix W_k formed by the first k rows and the first k columns of 𝓛_S, k = 1, 2, ..., 3n + 1. Put w_k := det W_k. Let us first prove the following fact.

Proof. By a direct calculation, we obtain w_1 = 3/2, w_2 = 4/3, w_3 = 29/18, w_4 = 54/27, w_5 = 295/162 and w_6 = 536/243. Expanding det W_k along its last row, we obtain a recurrence for w_k. According to Theorem 1, we can compute the degree-Kirchhoff indices of the linear pentagonal derivation chains from QP_1 to QP_40, as shown in Table 1. Based on Claims 1 and 3 and Lemma 3, we recover the same results as Theorem 3 of [29], which further confirms that the result of our calculation (Theorem 2) is correct.

Theorem 2. Let QP_n denote a linear pentagonal derivation chain of length n. Then the number of spanning trees of QP_n is given by an explicit closed-form formula in n.

A Relation between the Gutman Index and the Degree-Kirchhoff Index of QP_n

At the end of this paper, we calculate the Gutman index and show that the degree-Kirchhoff index of QP_n is about half of its Gutman index.

Theorem 3. Let QP_n denote a linear pentagonal derivation chain of length n. Then Gut(QP_n) = 200n³ + 181n² + 31n + 1.

Proof. Let the vertices of QP_n be labeled as in Figure 1. Recall that Gut(G) = ∑_{i<j} d_i d_j d_{ij}. Therefore, we evaluate d_i d_j d_{ij} from each fixed vertex, sum the results, and divide by two. First, we compute the contribution of each type of vertex separately; the expressions for each type of vertex are obtained by fixing: the vertex 1 or 1′ of QP_n; the vertex 2 or 2′ of QP_n; the vertices 3l or 3l′ (1 ≤ l ≤ n) of QP_n; the vertices 3n + 1 or (3n + 1)′ of QP_n; the vertex 1• of QP_n; the vertices l• (2 ≤ l ≤ n − 1) of QP_n; and the vertex n• of QP_n.

Figure 1. The linear pentagonal derivation chain QP_n.

Table 1. The degree-Kirchhoff indices of the linear pentagonal derivation chains from QP_1 to QP_40.
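The kind of asymptotic ratio discussed in this section can be explored empirically for any graph family one can construct. Since building QP_n itself requires the labeling of Figure 1, the sketch below (Python with numpy) uses cycles C_n purely to illustrate how Gut(G), Kf*(G) and their ratio are computed; note that for cycles the ratio tends to 2/3 rather than the 1/2 obtained for QP_n, underlining that the limit is family-specific.

```python
import numpy as np

def indices(A):
    """Return (Gut(G), Kf*(G)) for adjacency matrix A."""
    n = len(A)
    d = A.sum(axis=1)
    # shortest-path distances (Floyd-Warshall)
    dist = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    # resistance distances via the Laplacian pseudoinverse
    Gp = np.linalg.pinv(np.diag(d) - A)
    g = np.diag(Gp)
    R = g[:, None] + g[None, :] - 2 * Gp
    i, j = np.triu_indices(n, k=1)
    w = d[i] * d[j]
    return (w * dist[i, j]).sum(), (w * R[i, j]).sum()

def cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

for n in (10, 50, 200):
    gut, kf_star = indices(cycle(n))
    print(n, kf_star / gut)   # approaches 2/3 for cycles; the limit
                              # depends on the graph family considered
```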
4,363.8
2023-10-01T00:00:00.000
[ "Mathematics" ]
Neuronal Glycoprotein M6a: An Emerging Molecule in Chemical Synapse Formation and Dysfunction. The cellular and molecular mechanisms underlying neuropsychiatric and neurodevelopmental disorders show that most of them can be categorized as synaptopathies, that is, disorders of synaptic function and plasticity. Synapse formation and maintenance are orchestrated by protein complexes that are in turn regulated in space and time during neuronal development, allowing synaptic plasticity. However, the exact mechanisms by which these processes are managed remain unknown. Large-scale genomic and proteomic projects led to the discovery of new molecules and their associated variants as disease risk factors. Neuronal glycoprotein M6a, encoded by the GPM6A gene, is emerging as one of these molecules. M6a has been involved in neuron development and in synapse formation and plasticity, and was also recently proposed as a gene target in various neuropsychiatric disorders, where it could also be used as a biomarker. In this review, we provide an overview of the structure of glycoprotein M6a and the molecular mechanisms by which it participates in synapse formation and maintenance. We also review evidence collected from patients carrying mutations in the GPM6A gene, from animal models, and from in vitro studies that together emphasize the relevance of M6a, particularly in synapses and in neurological conditions.

INTRODUCTION

Excitatory synapses are specific neuron-neuron communications between axonal and dendritic processes that orchestrate information flow and storage in the brain (Tu et al., 2018). Synapse formation involves a complex series of events with at least three primary stages: axon elongation and guidance, by which axons reach their target area; synaptic specificity, governed by an appropriate association of synaptic molecules; and synaptogenesis, which creates functional synapses (Shen and Cowan, 2010). Functional synapses are supported by specialized protein complexes, whose function is regulated in time and space during neuronal development, allowing effective synaptic plasticity (Torres et al., 2017). The cellular and molecular mechanisms underlying neuropsychiatric and neurodegenerative disorders reveal that most of them can be classified as synaptopathies (Sengpiel, 2018); however, they are not yet fully understood. Large-scale genome-wide association studies (GWAS, reviewed in Claussnitzer et al., 2020) promote the study of additional candidates. Candidates arise either as genetic vulnerability or susceptibility loci, allowing researchers to explore how they might be involved in the molecular mechanisms governing neurological diseases with complex etiology and heterogeneous genetic predisposition. Neuronal glycoprotein M6a, encoded by the GPM6A gene, is a member of the tetraspan proteolipid protein (PLP) family, together with PLP/DM20 and M6b. Since its discovery in 1992 (Baumrind et al., 1992; Lagenaur et al., 1992), M6a has emerged as one of many proteins involved in neuron development and synapse plasticity, and as a key component in various neuropsychiatric disorders (Michibata et al., 2009; El-Kordi et al., 2013; Gregor et al., 2014; Fuchsova et al., 2015). This review provides a quick overview of the structure and molecular mechanisms by which M6a participates in synapse formation and maintenance.
Moreover, we review evidence collected from patients carrying mutations within GPM6A, from animal models, and from in vitro studies that highlight the relevance of M6a, particularly in synapses and related neurological conditions.

GENE, PROTEIN, AND STRUCTURAL DOMAINS

The PLP family members are integral membrane proteins with a conserved topology: four transmembrane domains (TMDs), two extracellular loops (EC1 and EC2), one intracellular loop (IC), and the N- and C-termini both at the cytoplasmic face (Figure 1A). M6a exhibits low sequence identity with both PLP (38%) and M6b (52%); however, the TMDs are highly conserved (Greer and Lees, 2002; Fernandez et al., 2010). Human glycoprotein M6a is encoded by a 369.731 kb gene organized into seven exons and located at chromosome 4q34.2. The full-length gene encodes a 278 amino acid membrane protein with a molecular mass of approximately 32 kDa (Olinsky et al., 1996). The amino acid sequence of M6a is highly conserved within mammals (more than 98% identity). Post-translational modifications of M6a are summarized in Figure 1A. M6a has seven potential phosphorylation sites, some of which are responsible for specific features described below. M6a has four cysteine residues within its EC2 that are critical for its folding and function; in particular, C174 and C192 are linked by a disulfide bond, forming an intradomain loop important for protein-protein interactions (Fuchsova et al., 2009). The EC2 also contains two predicted N-glycosylation sites. Only glycosylation at N164 has been experimentally corroborated, although no functions have been reported to date (Fang et al., 2016). Seven other cysteine residues close to the TMDs on the cytoplasmic side are potential palmitoylation sites, three of which (C17/18/21), conserved within the PLP family, are necessary for M6a inclusion in lipid rafts (Honda et al., 2017; Ito et al., 2018). GPM6A RNA expression rapidly increases during development in human and murine brains. GPM6A is a brain-specific gene with a very high level of expression, and M6a represents one of the most abundant palmitoylated proteins in the CNS (Huminiecki et al., 2003; Kang et al., 2008). By contrast, low expression was detected in the lung, spleen, ovary, and thyroid gland (Fagerberg et al., 2014; Yu et al., 2014; Yue et al., 2014). M6a protein levels are enriched in the hippocampus, cerebellum, striatum, and prefrontal cortex, among other brain areas. Regarding cell-specific expression in the CNS, M6a is mostly located at the cell surface of neurons and of epithelial cells of the choroid plexus, but not in glial cells. The neuronal expression of M6a is a distinctive feature within the PLP family, as PLP is expressed only in glial cells and M6b is expressed in both neurons and glia (Werner et al., 2013).

M6a's Role in the Presynaptic Formation

The coordination of neuronal differentiation, axonal growth, and guidance involves the timely expression of cell surface proteins and extracellular adhesion molecules, accompanied by structural changes in the cellular cytoskeleton (Caceres et al., 2012). During neurite outgrowth, plasma membrane proteins are directed toward neurites first, and then they are concentrated in growth cones (GCs). There, proteins are available to respond to orientation signals and to signal the path to a specific destination (Fuentes and Arregui, 2009; Ulloa et al., 2018).
Indeed, M6a was identified as an "edge-membrane antigen" (EMA) because it was found concentrated at the edge of neuronal GCs and their lamellipodia in cultured neurons from the cerebellum, cortex, and hippocampus (Baumrind et al., 1992; Lagenaur et al., 1992). Later on, M6a was found to be critical for neurite growth in a wide variety of in vitro models, from brain tissue explants to neuronal cell lines, and from human to Xenopus (Lagenaur et al., 1992; Mukobata et al., 2002; Alfonso et al., 2005; Zhao et al., 2008; Michibata et al., 2009; Formoso et al., 2015a). Notably, Sato et al. (2011a) observed a reduction of the axon projections in the olfactory bulb of embryonic brains at E14.5 in Gpm6a knockout mice. Besides, double knockout mice for Gpm6a and Gpm6b show decreased axon elongation and a thinner corpus callosum, which could be rescued by forced expression of M6 proteins (Mita et al., 2015). Presynaptic boutons are terminal specializations of the axon, which contain synaptic vesicles (SVs) filled with neurotransmitters. This specific compartmentalization involves the coordination of both SVs and presynaptic active zone proteins, which define regions in the membrane for the release of neurotransmitters into the synaptic cleft (Yue et al., 2014). Thus, there are three main types of proteins: (i) residents of the SV membrane, such as synaptophysin (Egbujo et al., 2016); (ii) filament or adaptor proteins of the cytomatrix, such as piccolo and bassoon (Hida and Ohtsuka, 2010); and (iii) those that participate in synaptic vesicle exocytosis, such as synaptosomal associated protein 25 (SNAP25) (Antonucci et al., 2016; Figure 1E). Roussel et al. (1998) revealed that M6a is distributed at the presynaptic membrane, in particular on the membrane of SVs, which was confirmed by proteomic analyses (Takamori et al., 2006; Taoufiq et al., 2020). Besides, M6a was also found at glutamatergic nerve terminals in the cerebellum and cerebral cortex, but not in GABAergic neurons, of adult mouse brains. This specific excitatory preference was also confirmed in the hippocampal formation, in which M6a colocalizes with the vesicular glutamate transporter VGLUT (Cooper et al., 2008). Recently, M6a has also been associated with other SV and presynaptic active zone (PAZ) resident proteins, such as synaptic vesicle protein 2B (SV2B), piccolo, bassoon, and synapsin 1 (Aparicio et al., 2020). Overexpression of M6a in hippocampal neurons produced a significant increase in synaptophysin puncta, correlating with an increase in the number of synapses. On the contrary, neurons subjected to siRNA depletion of M6a and neurons overexpressing a form of M6a truncated at the EC2 loop exhibited a decreased number of synaptophysin puncta (Alfonso et al., 2005; Fuchsova et al., 2009; Formoso et al., 2016). Moreover, M6a internalizes and recycles back to the cell membrane via clathrin-mediated endocytosis, and localizes in Rab5-, Rab7-, and Rab11-positive endosomes, as in the case of SV recycling pathways (Wu et al., 2007; Formoso et al., 2016; Garcia et al., 2017; Rosas et al., 2018). Indeed, acute induction of M6a internalization correlated with a decrease in both the number of synaptophysin puncta and the number of synapses (Garcia et al., 2017), suggesting that M6a might play an active role in SV/neurotransmitter release.

M6a's Role in the Postsynaptic Formation

Directly opposed to the presynaptic terminal is the postsynaptic target.
Dendritic spines, the main postsynaptic compartment, are protrusions from the dendrite shaft that receive information from axonal terminals through different neurotransmitter receptors. Dendritic spines can show dynamic changes in number, size, shape, and movement, which allow synaptic rearrangements to take place (Tonnesen and Nagerl, 2016). This plastic feature is a critical substrate for functional plasticity during pruning or learning and memory, and also for synaptic dysfunction, as observed in neurodegenerative and neuropsychiatric conditions including Alzheimer's disease (AD), autism spectrum disorders (ASD), schizophrenia, and depression (Penzes et al., 2011; Bian et al., 2015; Ozcan, 2017). According to their size and shape, dendritic spines can be classified as thin, stubby, mushroom, and cup-shaped. Moreover, according to their functionality, dendritic protrusions can also be classified as immature (filopodia, thin, and stubby) or mature (mushroom and cup-shaped) (Figure 1D). Three possible models have been proposed to explain how a synapse is formed: (i) Sotelo's model describes that a synapse could arise when an immature spine is contacted by the axon terminal, inducing its development toward the mushroom type (Sotelo et al., 1975); (ii) the Miller/Peters model proposes that a presynaptic terminal directly contacts the dendrite shaft, inducing spine outgrowth (Miller and Peters, 1981); and (iii) the filopodial model, in which dendritic filopodia may actively initiate synaptogenic contacts by contacting a presynaptic terminal, thereby inducing its stabilization and subsequent maturation to the mushroom type (Yuste and Bonhoeffer, 2004; Ziv and Fisher-Lavie, 2014). M6a has been widely described to be involved in filopodium/spine formation in different cell culture models (Alfonso et al., 2005; Sato et al., 2011b; Scorticati et al., 2011; Formoso et al., 2015b; Alvarez Julia et al., 2016; Formoso et al., 2016; Rosas et al., 2018). Two pathways have been implicated by which M6a increases filopodium density. M6a overexpression induces activation, by phosphorylation, of the intracellular cascade involving the Src and MAPK/ERK pathway, and the localization of M6a within lipid rafts is compatible with this (Scorticati et al., 2011). Likewise, signaling pathways that include Rac1 and Pak1 activation through coronin 1A facilitate M6a-induced filopodium formation (Alvarez Julia et al., 2016). There are key domains required for M6a to induce filopodium/spine formation (Figure 1D): (i) TMD homotypic interactions, commanded especially by particular glycine residues at TMD2 and TMD4 (Formoso et al., 2015b, 2016); (ii) a disulfide bridge at the EC2 loop formed between cysteine residues C174 and C192 (Fuchsova et al., 2009); and (iii) the C-terminal domain residues K250/K255/E258 (Rosas et al., 2018). Although the N-terminus is not involved in filopodium induction (Mita et al., 2015; Rosas et al., 2018), phosphorylation of certain serine/threonine residues at the N- and C-termini, T10/S256/S267/T268, proved to be necessary for filopodial motility. Although M6a induces dendritic protrusion plasticity, whether or not it is located at the postsynaptic membrane remains undetermined. For instance, M6a was identified at presynaptic membranes and enriched in glutamatergic synaptic vesicles docked to the presynaptic active zone (Roussel et al., 1998; Boyken et al., 2013).
Conversely, M6a was detected in a proteomic analysis of enriched postsynaptic membrane fractions from mouse brains, suggesting a postsynaptic localization for M6a (Reim et al., 2017). Besides, we described that M6a co-immunoprecipitated with integral components of postsynaptic membranes such as metabotropic glutamate receptor 1 (GRM1), voltage-dependent anion channel 1 (VDAC1), and N-methyl-D-aspartate receptor type 1, NMDA-R1 (GR1A1) (Aparicio et al., 2020). Indeed, M6a-overexpressing neurons exhibited an increase in the number of NMDA-R1 clusters, whilst truncated forms of M6a or its depletion decreased the number of NMDA-R1 clusters (Formoso et al., 2016; Garcia et al., 2017). The colocalization of M6a and NMDA-R1 suggests that M6a acts as a scaffold protein, assembling proteins and lipids to form a signaling platform on the neuronal surface (Wu et al., 2007; Scorticati et al., 2011). However, to determine whether M6a is indeed located at the postsynaptic membrane, techniques such as cryo-electron microscopy and super-resolution microscopy are needed to avoid the contamination inherent to biochemistry-based techniques (Liu et al., 2019; Nosov et al., 2020).

M6a's POTENTIAL ROLE IN NEURON-GLIA INTERACTION

M6a is enriched within cerebellar parallel and hippocampal mossy fibers, whose axons remain unmyelinated also in adulthood (Cooper et al., 2008). Nonetheless, new data suggest that M6a could participate in neuron-glia interactions. This subject exceeds the topics covered in this review, and no functional experiments have been reported yet, but we would like to highlight a few insights. Pourhaghighi et al. (2020) identified associated protein complexes in adult mammalian brains and built BraInMap (www.bu.edu/dbin/cnsb/mousebrain). M6a was found within a complex of 30 interacting proteins, some of which are myelin sheath proteins, including the main myelin glycoproteins PLP and MAG, or myelin sheath-associated proteins like contactin 1 and contactin-associated protein 1. In agreement, we identified 20 myelin proteins in the co-immunoprecipitation complexes formed by M6a's extracellular domains and rat hippocampal samples, among which PLP was experimentally confirmed (Aparicio et al., 2020). Also, Jahn et al. (2020) identified M6a in the myelin sheath of post mortem human brains.

M6a IN SYNAPSE FUNCTION AND DYSFUNCTION

The evidence reviewed so far highlights an important role of M6a in neuronal development and synaptic plasticity. However, few reports interrogate the role of M6a in active synapses. All of them are based on determining the number of synaptophysin clusters and/or the number of colocalized synaptophysin/NMDA-R1 clusters in in vitro models (Alfonso et al., 2005; Fuchsova et al., 2009; Formoso et al., 2016; Garcia et al., 2017). For instance, acute depletion of endogenous M6a using siRNA, or treatment with M6a-mAb neutralizing antibodies, dramatically decreases the number of synaptophysin clusters, or of colocalized synaptophysin and NMDA-R1 clusters (Alfonso et al., 2005; Garcia et al., 2017). The latter suggests that M6a is required not only for the formation of synapses but also for their maintenance. Evidence for M6a's relevance in vivo comes from the observation that several GPM6A variants and inadequate expression levels were associated with several neuropsychiatric disorders (see Table 1), thus highlighting that impairment of M6a function contributes to disease onset or increases susceptibility.
Chronic stress induces behavioral changes in mammals, which can be evidenced by mood disorders including anxiety, claustrophobia, and depression. In this context, mRNA levels of Gpm6a were decreased in the hippocampus of chronically stressed animals (Alfonso et al., 2002, 2004a) and then rescued by the administration of antidepressants (Alfonso et al., 2006). These findings are consistent with reports in which chronically stressed animals presented reduced dendritic arborization and structural changes in the mossy fiber terminals (Cooper et al., 2008). In humans, GPM6A mRNA levels were also decreased in the hippocampus of depressed patients who committed suicide (Fuchsova et al., 2015). Furthermore, depressed patients treated with serotonin reuptake inhibitors showed a reduction of M6a levels in saliva compared to depressed patients under benzodiazepine treatment or no treatment at all. As a result, M6a was proposed as a mood disorder biomarker (Monteleone et al., 2020). Gpm6a knockout mice lack obvious behavioral abnormalities, but when subjected to moderate restraint stress they presented a claustrophobia-like phenotype. These findings encouraged El-Kordi and collaborators to investigate GPM6A variants in claustrophobic patients. In their study they found a 3′-untranslated region variant that produced a functional mRNA that could not be silenced by neuronal miR124, thereby losing the physiological regulation of the gene (El-Kordi et al., 2013). Gregor et al. (2014) identified a de novo duplication of the GPM6A gene in a patient with learning disabilities and behavioral anomalies. Furthermore, Rao-Ruiz et al. (2015) revealed that, in mice subjected to a contextual fear-memory learning task, M6a was differentially up-regulated 4 h later. Until now, no direct relationship has been established between M6a levels and the molecular processes underlying memory and learning. However, these independent findings suggest that this may warrant further investigation. The intrinsic genetic program responsible for synapse formation and maintenance involves correct pruning of the dendritic spines. Abnormal dendritic spine pruning during development and in adulthood contributes to the generation of synaptopathies such as ASD, schizophrenia, or AD (Penzes et al., 2011). For instance, subjects with ASD have deficits in social interactions, disruption of oral communication, and repetitive behavior, correlated with an elevated number of dendritic spines and a late onset of the pruning process (Toro et al., 2010). Interestingly, a proteomic analysis by Abraham et al. (2019) revealed that M6a levels were increased in samples from the cerebellum of patients with ASD. Moreover, proteomic studies showing altered M6a levels in both animal models and patients with AD point to a possible role in memory decline (Xu et al., 2006; Lachén-Montes et al., 2016; Chiasserini et al., 2020; Muraoka et al., 2020). Also, M6a is downregulated in the hippocampal formation, whereas it is enriched in extracellular vesicles from cerebrospinal fluid (CSF) samples of AD patients, suggesting an active secretion of M6a during AD progression (Xu et al., 2006; Chiasserini et al., 2020). Indeed, Muraoka et al. (2020) proposed M6a, together with ANX5, VGF, and ACTZ, as biomarkers for monitoring AD progression from CSF samples. One of the goals of genome-wide association studies is to link specific genetic variants with specific phenotypes.
This information can then be used to evaluate the risk or susceptibility of a population to develop a certain disease, in diagnostic tests, or to guide treatments. Indeed, single nucleotide polymorphisms (SNPs) within non-coding regions of GPM6A have been linked to schizophrenia (Boks et al., 2008; Schizophrenia Working Group of the Psychiatric Genomics Consortium, 2014; Li et al., 2017; Pardiñas et al., 2018; Lam et al., 2019), and one variant has been linked to both neuroticism (Nagel et al., 2018) and bipolar disorder (Greenwood et al., 2012). However, there are still no reports confirming whether any of these SNPs lead to changes in M6a gene expression or function. Our lab studied three non-synonymous SNPs within GPM6A's TMD coding region reported in the dbSNP database. By doing reverse genetic experiments, we demonstrated that all of these nsSNPs prevented M6a from being functional in neurons and impaired the formation of dendritic spines and synapses, owing to decreased stability, impaired dimerization, or improper folding of the protein (Formoso et al., 2015b, 2016).

CONCLUDING REMARKS

Evidence collected over the 30 years since its discovery shows that glycoprotein M6a has a critical role in synapse formation, plasticity, and maintenance. Research so far has focused on in vitro approaches, with only a few articles studying M6a-deficient mice. Unfortunately, in none of those studies were synaptic activity or synaptic integrity interrogated. New data coming from "omic" and GWAS approaches, in combination with basic investigation, will expand our knowledge of the field and define the exact role of GPM6A in neuronal development and synaptopathies. This in turn will offer new routes to improve diagnosis and develop more effective treatments.

AUTHOR CONTRIBUTIONS

AL, GA, and CS were involved in bibliography revision. CS contributed to the design and conceptualization of the research topic. CS, AL, and GA wrote the manuscript. AL and GA made the figure and the table. All authors contributed to the article and approved the submitted version.
4,636.8
2021-05-04T00:00:00.000
[ "Biology", "Medicine" ]
Search for dark matter particles produced in association with a Higgs boson in proton-proton collisions at $\sqrt{s} =$ 13 TeV. A search for dark matter (DM) particles is performed using events with a Higgs boson candidate and large missing transverse momentum. The analysis is based on proton-proton collision data at a center-of-mass energy of 13 TeV collected by the CMS experiment at the LHC in 2016, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. The search is performed in five Higgs boson decay channels: h $\to \mathrm{b\bar{b}}$, $\gamma\gamma$, $\tau^{+}\tau^{-}$, W$^{+}$W$^{-}$, and ZZ. The results from the individual channels are combined to maximize the sensitivity of the analysis. No significant excess over the expected standard model background is observed in any of the five channels or in their combination. Limits are set on DM production in the context of two simplified models. The results are also interpreted in terms of a spin-independent DM-nucleon scattering cross section and compared to those from direct-detection DM experiments. This is the first search for DM particles produced in association with a Higgs boson decaying to a pair of W or Z bosons, and the first statistical combination based on five Higgs boson decay channels.

Introduction

A host of astrophysical and cosmological observations confirm [1-4] that dark matter (DM) exists and makes up 26.4% of the total energy density of the universe [5]. However, all of the existing evidence for DM is based only on its gravitational interaction. Whether DM interacts with standard model (SM) particles in any other way remains an open question. There are a number of beyond-the-SM theories suggesting a particle nature of DM [6]. Several types of particle candidates for DM are proposed in these models, all compatible with the observed relic density of DM in the universe [7]. A favored hypothesis is that the bulk of DM is in the form of stable, electrically neutral, weakly interacting massive particles (WIMPs) [8], with masses in a range between a few GeV and a few TeV, thus opening the possibility of DM production at high-energy colliders [9]. Traditionally, searches for DM at colliders involve a pair of WIMPs that recoil against a visible SM particle or a set of SM particles. Because of the lack of electric charge and the small interaction cross section, WIMPs do not leave a directly detectable signal, but in a hadron collider experiment their presence can be inferred via an imbalance of the total momentum in the plane transverse to the colliding beams (p_T^miss), as reconstructed in the detector. This scenario gives rise to a potential signature where a set of SM particles, X, is produced recoiling against the DM particles, which are represented by the p_T^miss (the "mono-X" signature). Recent searches at the CERN LHC considered X to be a hadronic jet [10,11], heavy-flavor quarks (bottom and top) [12,13], a photon [14,15], or a W or Z boson [11, 16-18]. The discovery of an SM-like Higgs boson [19-21] extended the possibility of probing DM at colliders, complementing other mono-X searches. In this paper we designate the state observed at 125 GeV by the symbol h since, in the context of the theoretical models considered below, it does not correspond to the SM Higgs boson. Here, we present a search for the pair production of DM particles in association with a Higgs boson resulting in the final state h + p_T^miss [22,23], referred to as the "mono-Higgs".
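For concreteness, p_T^miss as used throughout is the magnitude of the negative vector sum of the transverse momenta of all reconstructed particles in the event. A minimal numerical sketch (Python with numpy; the toy particle list is invented for illustration and is not from the analysis):

```python
import numpy as np

# Toy list of reconstructed particles: (pT [GeV], phi [rad])
particles = [(85.0, 0.3), (40.0, 2.6), (25.0, -1.9)]

# Vector sum of the transverse momenta
px = sum(pt * np.cos(phi) for pt, phi in particles)
py = sum(pt * np.sin(phi) for pt, phi in particles)

# p_T^miss is the magnitude of the *negative* of this sum
pt_miss = np.hypot(px, py)
phi_miss = np.arctan2(-py, -px)
print(f"pT_miss = {pt_miss:.1f} GeV, phi_miss = {phi_miss:.2f} rad")
```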
While in a typical mono-X search the X particle is emitted as initial-state radiation, this process is strongly suppressed in the case of the Higgs boson because of the smallness of both the Higgs boson Yukawa couplings to light quarks and its loop-suppressed coupling to gluons. Thus, mono-Higgs production can result either from final-state radiation of DM particles or from a beyond-the-SM interaction of DM particles with the Higgs boson, typically via a mediator particle. A number of searches have been carried out by the ATLAS and CMS Collaborations looking for the mono-Higgs signature in several Higgs boson decay channels, at center-of-mass energies of 8 and 13 TeV [24-32]. So far, none of these searches has observed a significant excess of events over the SM expectations. In this paper, we describe the first search for mono-Higgs production in the W⁺W⁻ and ZZ Higgs boson decay channels, as well as the combination of these searches with the previously published results in the bb̄ [30,31], γγ [32], and τ⁺τ⁻ [32] channels. (Hereafter, for simplicity we refer to bb̄, τ⁺τ⁻ and W⁺W⁻ as bb, ττ and WW, respectively.) All the analyses are based on a data sample of proton-proton (pp) collisions at √s = 13 TeV collected in 2016 and corresponding to an integrated luminosity of 35.9 fb⁻¹. Two simplified models of DM production recommended by the ATLAS-CMS Dark Matter Forum [33] are investigated. Figure 1 shows representative tree-level Feynman diagrams corresponding to these two models. The diagram on the left describes a type-II two-Higgs-doublet model (2HDM) [34,35] further extended by a U(1)_Z′ group and referred to as the Z′-2HDM [36]. In this model, the Z′ boson is produced via a quark-antiquark interaction and then decays into a Higgs boson and a pseudoscalar mediator A, which in turn can decay to a pair of Dirac fermion DM particles χ. The diagram on the right shows the production mechanism in the baryonic Z′ model [22], where Z′ is a vector boson corresponding to a new baryon number U(1)_B symmetry. The Z′ boson acts as a DM mediator and can radiate a Higgs boson before decaying to a pair of DM particles. A baryonic Higgs boson h_b is introduced to spontaneously break the new symmetry and to generate the Z′ boson mass via a coupling that depends on the h_b vacuum expectation value. The Z′ boson couplings to quarks and to the DM particles are proportional to the U(1)_B gauge couplings. A mixing between the h_b and h states allows the Z′ boson to radiate h, resulting in a mono-Higgs signature. In the Z′-2HDM, the predicted DM production cross section depends on a number of parameters. However, if the mediator A is produced on-shell, the kinematic distributions of the final-state particles depend only on the Z′ and A boson masses, m_Z′ and m_A. In this paper, a scan in m_Z′ between 450 and 4000 GeV and in m_A between 300 and 1000 GeV is performed. Values of m_A below 300 GeV have already been excluded by the existing constraints on flavor-changing neutral currents in b → sγ transitions [34], and hence are not considered in the analysis. The masses of the 2HDM heavy Higgs boson and the charged Higgs boson are both fixed to m_A. The ratio of the vacuum expectation values of the two Higgs doublets, tan β, is varied from 0.4 to 10. The DM particle mass is fixed to 100 GeV, the A-DM coupling strength g_χ is fixed to 1, and the Z′ coupling strength to quarks g_Z′ is fixed to 0.8.
The branching fraction of the decay of A to DM particles, B(A → χχ), decreases as the mass of the DM candidate (m_χ) increases, for the range of m_A considered in this analysis. However, since the relative decrease in B(A → χχ) is less than 7% as m_χ increases from 1 to 100 GeV, the results shown in this paper for m_χ = 100 GeV are also applicable to lighter DM particles. The results are expressed in terms of the product of the signal production cross section and the branching fraction B(A → χχ), where B(A → χχ) is ≈100% for m_A = 300 GeV and decreases for m_A greater than twice the mass of the top quark, where the competing decay A → tt becomes kinematically accessible. The contribution to the mono-Higgs signal from another process possible in the model, Z′ → Z(→ νν) + h, is not considered in this analysis. Further details on the choice of the model parameters are given in Refs. [27,37]. We note that, for the chosen set of parameters, the values of m_Z′ within our sensitivity reach have recently been excluded by the ATLAS and CMS searches for dijet resonances at √s = 13 TeV [38-41]. Nevertheless, we keep this benchmark, specifically developed for the LHC Run-2 searches [33], to allow a direct comparison with the results of other mono-Higgs searches. Given that the kinematic distributions of the final states depend only very weakly on the value of the g_Z′ coupling, our results can be reinterpreted for lower g_Z′ values, where the interplay between the mono-Higgs and the dijet analysis sensitivities changes. For the baryonic Z′ model, m_Z′ between 100 and 2500 GeV and m_χ between 1 and 700 GeV are used for this study. The Z′-DM coupling is fixed to g_χ = 1 and the Z′-quark coupling is fixed to g_q = 0.25. The mixing angle between the baryonic Higgs boson and the SM-like Higgs boson is set to sin θ = 0.3, and the coupling between the Z′ boson and h is assumed to be proportional to m_Z′. The branching fractions of the Higgs boson decays are altered for m_Z′ ≲ m_h/2, because the decay h → Z′Z′(*) becomes kinematically accessible. Therefore, the region m_Z′ < 100 GeV, for which the modification of the h branching fractions is sizable, is not considered in the analysis. For both benchmark models, h is assumed to have a mass of 125 GeV. A considerable amount of p_T^miss is expected, as shown in Fig. 2. The reason that the p_T^miss spectrum is harder for the Z′-2HDM is that the DM particles are produced via a resonant mechanism in this case, whereas for the baryonic Z′ model they are not. The difference in shape becomes more marked as m_Z′ increases. In Fig. 2 (right) it can be seen that the shape of the p_T^miss distribution is almost independent of m_χ in the baryonic Z′ model and depends most strongly on m_Z′. Although the signal sensitivity in the h → bb channel is higher than in the other final states considered (γγ, ττ, WW, and ZZ), because of the channel's large branching fraction and manageable background in the large-p_T^miss region, the statistical combination of all five decay modes is performed to improve the overall sensitivity. The h → γγ and h → ZZ channels exhibit better resolution in the reconstructed Higgs boson invariant mass, while the h → ττ, h → WW, and h → ZZ channels benefit from lower SM backgrounds, which results in a higher sensitivity for signals with a soft p_T^miss spectrum. In the h → bb channel analysis, the h is reconstructed from two overlapping b jets.
Thus, different approaches are used for the two models because of the difference in the average Lorentz boost of the Higgs boson, which is higher in the Z′-2HDM than in the baryonic Z′ model. The Higgs boson is reconstructed using a jet clustering algorithm with a distance parameter of 0.8 for the Z′-2HDM and 1.5 for the baryonic Z′ model. For the baryonic Z′ model, a simultaneous fit of the distribution of the recoil variable in the signal region (SR) and the control regions (CRs) is performed to extract the signal. For the Z′-2HDM, a parametric fit of the Z′ boson transverse mass is used to estimate the major backgrounds and to extract the signal. The search in the h → γγ channel [32] uses a fit to the diphoton invariant mass distribution to extract the signal. This analysis is performed in two categories distinguished by the p_T^miss value, high (>130 GeV) and low, in order to be sensitive to a large variety of possible signals. The search in the h → ττ channel [32] is based on the combination of the events for the three τ lepton decay modes with the highest branching fractions: τ_h τ_h, µτ_h, and eτ_h, where τ_h denotes a hadronically decaying τ lepton. After requiring p_T^miss > 105 GeV in order to suppress the background sufficiently, the signal is extracted by performing a simultaneous fit, in the SR and in the CRs, to the transverse mass of the Higgs boson reconstructed from the two τ leptons. In the h → WW channel search, the fully leptonic decays of the two W bosons are considered, requiring one lepton to be an electron and the other to be a muon, in order to reduce the contamination from the Z → e⁺e⁻ and Z → µ⁺µ⁻ backgrounds. The h → ZZ search is performed in the fully leptonic decay channel of the Z boson pair: h → ZZ → 4ℓ. The analysis strategy follows closely the measurement of the Higgs boson properties in the same channel [42]. The paper is organized as follows. After a brief introduction of the CMS detector in Section 2, the data and simulated event samples are described in Section 3. The event reconstruction and the analysis strategy for each Higgs boson decay mode used in the statistical combination are detailed in Sections 4 and 5, respectively. The combination procedure and the main systematic uncertainties are described in Sections 6 and 7, respectively. The results are presented in Section 8, and the paper is summarized in Section 9.

The CMS detector

The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters, made of steel and quartz fibres, extend the pseudorapidity (η) coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid. Events of interest are selected using a two-tiered trigger system [43]. The first level, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100 kHz within a time of less than 4 µs. The second level, known as the high-level trigger, consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1 kHz before data storage.
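Returning to the analysis strategies summarized above: several channels extract the signal from a transverse-mass observable built from a visible system and p_T^miss. A common definition is m_T = sqrt(2 pT^vis pT^miss (1 − cos Δφ)); whether each channel uses exactly this massless form is an assumption here, and the sketch below (Python, toy values) is intended only to illustrate the kind of variable being fitted.

```python
import numpy as np

def mt(pt_vis, phi_vis, pt_miss, phi_miss):
    """Transverse mass of a visible system plus missing momentum:
    m_T^2 = 2 * pT_vis * pT_miss * (1 - cos(dphi))  (massless approximation)."""
    dphi = phi_vis - phi_miss
    return np.sqrt(2 * pt_vis * pt_miss * (1 - np.cos(dphi)))

# Toy values in GeV: a visible ditau system back-to-back with pT_miss
print(mt(120.0, 0.1, 150.0, 3.0))
```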
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [44]. The pp collision data were collected at √s = 13 TeV in 2016. The 25 ns time spacing between adjacent bunches leads to an average number of pp interactions per bunch crossing of 23, assuming a pp inelastic cross section of 69.2 mb [45]. The integrated luminosity of the data sample used in all the analyses described in this paper corresponds to 35.9 fb −1 , after imposing data quality requirements. Signal and background simulation Signal samples for the five Higgs boson decay modes are generated at leading order (LO) in perturbative quantum chromodynamics (QCD) using the MADGRAPH5_aMC@NLO v2.3.0 generator [46,47], for both the Z′-2HDM and the baryonic Z′ model [33]. The Higgs boson is treated as a stable particle during the generation, and its decays are described subsequently using PYTHIA 8.212 [48]. A detailed description of the simulated samples used for the h → bb, h → γγ, and h → ττ analyses can be found in Refs. [30][31][32]. The production of a Higgs boson in association with a Z boson decaying to a pair of neutrinos is an irreducible background for all the final states considered. Other Higgs boson backgrounds originating from the gluon-gluon fusion (ggF) and vector boson fusion (VBF) production modes are small. These backgrounds are simulated at next-to-LO (NLO) in QCD with POWHEG v2 [49][50][51]. The main nonresonant backgrounds in the h → WW analysis are from continuum WW, single top quark, and top quark pair production. The continuum WW production is simulated in different ways: POWHEG [52] is used to generate qq → WW events at NLO precision, whereas gg → WW events are generated at LO using MCFM v7.0 [53][54][55]. The simulated qq → WW events are reweighted to reproduce the p WW T distribution from the p T -resummed calculation at next-to-NLO (NNLO) plus next-to-next-to-leading logarithmic precision [56,57]. The LO gg → WW cross section, obtained directly from MCFM, is further corrected to NNLO precision via a K factor of 1.4 [58]. Single top quark, tt, WZ, and Wγ * backgrounds are generated at NLO with POWHEG. Drell-Yan (DY) production of Z/γ * is generated at NLO using MADGRAPH5_aMC@NLO, and the p T spectrum of the dilepton pairs is reweighted to match the distribution observed in dimuon events in data. Other multiboson processes, such as Wγ, ZZ, and VVV (V = W or Z), are generated at NLO with MADGRAPH5_aMC@NLO. All samples are normalized to the latest available theoretical cross sections, NLO or higher [53,54,59]. In the h → ZZ analysis, SM Higgs boson production constitutes a major background, because it has the same experimental signature and satisfies the low p miss T threshold used in the analysis. It is simulated with POWHEG [49,50,60] in four main production modes: ggF, including quark mass effects [61]; VBF [62]; associated production with a top quark pair (tth) [63]; and associated production with a vector boson (Wh, Zh), using the MINLO HVJ [64] extension of POWHEG. In all cases, the Higgs boson is forced to decay via the h → ZZ → 4ℓ (ℓ = e, µ, or τ) channel. The description of the decay of the Higgs boson to four leptons is obtained using the JHUGEN 7.0.2 generator [65,66].
In the case of Zh and tth production, the Higgs boson is allowed to decay as h → ZZ → 2ℓ + X, such that four-lepton events where two leptons originate from the decay of the associated Z boson or top quarks are also taken into account in the simulation. The cross sections for the processes involving SM Higgs boson production are taken from Ref. [67]. All processes are generated using the NNPDF3.0 [68] parton distribution functions (PDFs), with the precision matching the parton-level generator precision. The PYTHIA generator with the underlying event tune CUETP8M1 [69] is used to describe parton showering and fragmentation. The detector response is simulated using a detailed description of the CMS apparatus, based on the GEANT4 package [70]. Additional simulated pp minimum bias interactions in the same or adjacent bunch crossings (pileup) are added to the hard scattering event, with the multiplicity distribution adjusted to match that observed in data. Event reconstruction The particle-flow (PF) algorithm [71] aims to reconstruct and identify each individual particle in an event, with an optimized combination of information from the various elements of the CMS detector. The energy of photons is obtained from the ECAL measurement. The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track [72]. The energy of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies. Electron candidates are required to have |η| < 2.5. Additional requirements are applied to reject electrons originating from photon conversions in the tracker material or jets misreconstructed as electrons. Electron identification criteria rely on observables sensitive to the bremsstrahlung along the electron trajectory and on the geometrical and momentum-energy matching between the electron track and the associated energy cluster in the ECAL, as well as on the ECAL shower shape observables and association with the primary vertex. Muon candidates are reconstructed within |η| < 2.4 by combining information from the silicon tracker and the muon system. Identification criteria based on the number of measurements in the tracker and in the muon system, the fit quality of the muon track, and its consistency with its origin from the primary vertex are imposed on the muon candidates to reduce the misidentification rate. For each event, hadronic jets are clustered from PF candidates using the infrared- and collinear-safe anti-k T algorithm [73,74], with a distance parameter of 0.4 (AK4 jets) or 0.8 (AK8 jets). Jet momentum is determined as the vectorial sum of all particle momenta in the jet, and is found from simulation to be, on average, within 5 to 10% of the true momentum over the entire p T spectrum and detector acceptance. Pileup interactions can result in additional spurious contributions to the jet momentum measurement from tracks and calorimetric energy depositions.
To mitigate this effect, tracks identified as originating from pileup vertices are discarded, and a correction based on the jet area [75] is applied to account for the neutral pileup particle contributions. Jet energy corrections are derived from simulation to bring the measured response of jets, on average, to that of particle-level jets. In situ measurements of the momentum balance in dijet, photon+jet, Z+jet, and multijet events are used to account for any residual differences in the jet energy scale (JES) between data and simulation [76]. The jet energy resolution (JER) amounts typically to 15% at p T = 10 GeV, 8% at 100 GeV, and 4% at 1 TeV. Additional selection criteria are applied to remove jets potentially dominated by anomalous contributions from various subdetector components or reconstruction failures [77]. At large Lorentz boosts, the two b quarks from the Higgs boson decay may produce jets that overlap and make their individual reconstruction difficult. In this case, either the AK8 jets or larger-area jets clustered from PF candidates using the Cambridge-Aachen algorithm [78,79] with a distance parameter of 1.5 (CA15 jets) are used. To reduce the impact of particles arising from pileup interactions when reconstructing AK8 or CA15 jets, the four-vector of each PF candidate matched to the jet is scaled with a weight calculated with the pileup-per-particle identification algorithm [80] prior to the clustering. The CA15 jets are also required to be central (|η| < 2.4). The "soft-drop" jet grooming algorithm [81] is applied to remove soft, large-angle radiation from the jets. The mass of a groomed AK8 or CA15 jet is referred to as m SD . To identify jets originating from b quark fragmentation (b jets), two b tagging algorithms are used. The combined secondary vertex (CSVv2) [82] and the combined multivariate analysis (cMVAv2) [82] algorithms are used to identify AK4 jets originating from b quarks by their characteristic displaced vertices. For the AK8 jets, subjets inside the jet are required to be tagged as b jets using the CSVv2 algorithm. A likelihood for the CA15 jet to contain two b quarks is derived by combining the information from the primary and secondary vertices and tracks in a multivariate discriminant optimized to distinguish CA15 jets originating from the h → bb decay from those produced by energetic light-flavor quarks or gluons [31]. Hadronically decaying τ leptons are reconstructed from jets using the hadrons-plus-strips algorithm [83]. This algorithm uses combinations of reconstructed charged hadrons and energy deposits in the ECAL to identify the three most common hadronic τ lepton decay modes: 1-prong, 1-prong+π 0 (s), and 3-prong. The τ h candidates are further required to satisfy the isolation criteria with an efficiency of 65 (50)% and a misidentification probability of 0. The p miss T is reconstructed as the negative vectorial sum of all PF particle candidate momenta projected on the plane transverse to the beams. Since the presence of pileup degrades the p miss T measurement (the p miss T resolution varies almost linearly from 15 to 30% as the number of vertices increases from 5 to 30 [84]), affecting mostly backgrounds with no genuine p miss T , an alternative definition of p miss T that is constructed only from the charged PF candidates ("tracker p miss T ") is used in the h → WW analysis. In the rest of the paper, p miss T corresponds to the PF p miss T , unless specified otherwise.
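As a concrete illustration of this definition, the following is a minimal sketch of a PF-style p miss T computation, assuming per-candidate transverse momenta and azimuthal angles are available as plain arrays; the function and input names are illustrative, not the CMS software interface.

```python
import numpy as np

def missing_pt(pt, phi):
    """p_T^miss: magnitude of the negative vectorial sum of all
    particle-flow candidate momenta in the plane transverse to the beams."""
    px = np.sum(pt * np.cos(phi))
    py = np.sum(pt * np.sin(phi))
    met = np.hypot(px, py)            # |-(px, py)| equals |(px, py)|
    phi_miss = np.arctan2(-py, -px)   # direction of the missing momentum
    return met, phi_miss

# Toy event with three visible candidates (pt in GeV, phi in radians)
pt = np.array([40.0, 25.0, 10.0])
phi = np.array([0.1, 2.5, -1.2])
met, phi_miss = missing_pt(pt, phi)
print(f"p_T^miss = {met:.1f} GeV at phi = {phi_miss:.2f}")
```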
Analysis strategy In this section we briefly discuss the analysis strategies in the previously published h → bb, h → γγ, and h → ττ channels [30-32], and provide full descriptions of the new analyses in the h → WW and h → ZZ decay channels. The summary of all the decay channels contributing to the combination is presented in Table 1. The h(→ bb) + p miss T channel The events in this final state are selected using triggers that require a large amount (>90 or >120 GeV) of p miss T or of H miss T , the latter defined as the magnitude of the vectorial sum of the transverse momenta of all jets with p T > 20 GeV in an event. The trigger selection is 96 (100)% efficient for events that subsequently have p miss T > 200 (350) GeV in the offline reconstruction. As can be seen in Fig. 2, the Lorentz boosts of the Higgs boson are different for the Z′-2HDM and the baryonic Z′ model. The events with large boost in the Z′-2HDM are reconstructed using a large-radius AK8 jet with p T > 200 GeV and |η| < 2.4. In addition, the h → bb topology is selected by requiring at least one subjet of the AK8 jet to be b tagged. The analysis considers separately two categories, distinguished by the number of b tagged subjets in the event, one or two, the latter being the high-purity category with higher sensitivity. For events with lower boost in the baryonic Z′ model, Higgs boson candidates are reconstructed using CA15 jets. To select the h → bb candidates using the AK8 jet, one or both subjets are required to pass the loose b tagging criteria, which have an efficiency of 85% and a misidentification rate of about 10% for jets originating from light-flavor quarks or gluons. In the case of the CA15 jets, a multivariate double b tagging algorithm [82] is used to discriminate the signal from the background of light-flavor jets [31], with an efficiency of 50% and a misidentification rate of 10%. (Table 1: Summary of the individual channels entering the combination, listing for each decay channel the final state or category and the corresponding reference. Analyses are categorized based on the model, p miss T selection, and subsequent decay products. The categorization is the same for both the Z′-2HDM and the baryonic Z′ model for all decay channels except, as indicated, h → bb. A dash ("-") in the last column implies that the analysis is presented in this paper.) The AK8 (CA15) analysis requires the Higgs boson candidate mass to be in the 105-135 (100-150) GeV range to reduce nonresonant backgrounds. The difference in the two mass window requirements is primarily driven by the differences in the performance of the two algorithms and in the jet mass resolutions. For both analyses, the mass window was chosen to maximize the signal sensitivity. In order to further reduce the background contributions from W + jets and tt production, events with an electron, muon, photon (p T > 10 GeV), or τ h (p T > 18 GeV) candidate passing loose identification and isolation criteria are vetoed. Furthermore, in the AK8 analysis, the number of additional b tagged AK4 jets with p T > 20 GeV is required to be zero, while in the CA15 analysis, the number of AK4 jets with p T > 30 GeV, well-separated from the CA15 jet in the event, is required to be at most one. The sensitivity of the analyses is further enhanced by using jet substructure variables. The full details of the event selection for the AK8 and CA15 jet analyses can be found in Refs. [30, 31].
The h(→ γγ) + p miss T channel Signal candidate events in the h → γγ analysis are selected using a diphoton trigger with asymmetric p T thresholds of 30 and 18 GeV on the leading and subleading photons, respectively, and loose identification and isolation requirements imposed on both photon candidates. The diphoton invariant mass is further required to exceed 90 GeV. Slightly higher thresholds of 30 (20) GeV on the leading (subleading) photon p T and of 95 GeV on the diphoton mass are used offline. The photon candidates are required to pass the isolation criteria if the spatial distance in the η-φ plane, ∆R = √((∆η) 2 + (∆φ) 2 ), between the two photons exceeds 0.3. The isolation selection is not used for photons coming from the decay of a highly Lorentz-boosted Higgs boson, as the two photons are likely to be found in the isolation cone of one another. The analysis is performed in two categories distinguished by the value of p miss T : high-p miss T (>130 GeV) and low-p miss T (50-130 GeV). The multijet background, with a large p miss T in an event originating from the mismeasurement of the energy of one or more jets, is reduced by allowing at most two jets with p T > 30 GeV. To suppress the contribution from the multijet background, the azimuthal separation between the direction of any jet with p T > 50 GeV and p miss T is required to exceed 0.5 radians. Finally, to select signal-like events with the DM particles recoiling against the Higgs boson, the azimuthal separation between p miss T and the direction of the Higgs boson candidate reconstructed from the diphoton system is required to exceed 2.1 radians. More details of the event selection can be found in Ref. [32]. The h(→ ττ) + p miss T channel In the h → ττ analysis, the three final states with the highest branching fractions are analyzed: τ h τ h , µτ h , and eτ h . The events are selected online with a trigger requiring the presence of two isolated τ h candidates in the τ h τ h final state, and a single-muon (single-electron) trigger in the µτ h (eτ h ) final state. Electron, muon, and τ h candidates passing the identification and isolation criteria are combined to reconstruct a Higgs boson candidate in these three final states. The signal events are then selected with the requirements p miss T > 105 GeV and visible p T of the ττ system > 65 GeV. To ensure that the ττ system originates from the Higgs boson, the visible mass of the ττ system is required to be less than 125 GeV. In order to reduce the contribution from multilepton and tt backgrounds, events are vetoed if an additional electron, muon, or b tagged jet is present. More details of the event selection can be found in Ref. [32]. The h(→ WW) + p miss T channel The search in the h → WW decay channel is performed in the fully leptonic, opposite-sign, different-flavor (eµ) final state, which has relatively low backgrounds. The presence of the neutrinos and the DM particles escaping detection results in large p miss T in signal events. The selected eµ + p miss T events include a contribution from the h → WW → ττν τ ν τ process with both τ leptons decaying leptonically. Several background processes can lead to the same final state, dominated by tt and WW production. Online, events are selected using a suite of single- and double-lepton triggers. In the offline selection, the leading (subleading) lepton is required to have p T > 25 (20) GeV. Electron and muon candidates are required to be well-identified and isolated to reject the background from leptons inside jets.
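Several of the selections above are expressed through the angular distance ∆R between two objects. A small self-contained helper, shown here only to make the definition explicit, must wrap the azimuthal difference into [−π, π) before combining it with ∆η:

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [-pi, pi)."""
    return (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt((Delta eta)^2 + (Delta phi)^2)."""
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))

# Photon isolation is applied only when the two photons are separated
# by Delta R > 0.3 (toy kinematics)
print(delta_r(0.5, 0.2, 0.7, -0.1) > 0.3)
```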
Backgrounds from low-mass resonances are reduced by requiring the dilepton invariant mass (m ℓℓ ) to exceed 12 GeV, while backgrounds with three leptons in the final state are reduced by vetoing events with an additional well-identified lepton with p T > 10 GeV. The p miss T in the event is required to exceed 20 GeV in order to reduce the contribution from instrumental backgrounds and Z/γ * → τ + τ − decays. To suppress the latter background, the p T of the dilepton system is required to be greater than 30 GeV, and the transverse mass of the dilepton and p miss T system, m h T , is required to be greater than 40 GeV. In order to reduce the Z/γ * → e + e − , µ + µ − , or τ + τ − background with p miss T originating either from τ lepton decays or from mismeasurement of the energies of electrons, muons, or additional jets, a variable p miss T,proj [85] is introduced. This is defined as the projection of p miss T on the plane transverse to the direction of the nearest lepton, unless this lepton is situated in the opposite hemisphere to p miss T , in which case p miss T,proj is taken to be p miss T itself. A selection using this variable efficiently rejects Z/γ * → ℓℓ background events, in which the p miss T is preferentially aligned with leptons. Since the p miss T resolution is degraded by pileup, a quantity p miss T,mp is defined as the smaller of the two p miss T,proj values: the one based on all the PF candidates in the event, and the one based only on the reconstructed tracks originating from the primary vertex. A requirement p miss T,mp > 20 GeV is effective in suppressing the targeted background. The above requirements define the event preselection. The expected signal significance is enhanced by introducing two additional selections: m ℓℓ < 76 GeV and the distance in η-φ space between the two leptons, ∆R ℓℓ < 2.5, as illustrated in Fig. 3. The first requirement exploits the fact that the invariant mass of the leptons coming from the h → WW decay tends to be low because of the presence of the two neutrinos in the decay chain and of the scalar nature of the Higgs boson. The second requirement utilizes the fact that the Higgs boson in signal events recoils against the DM particles and is highly boosted. Background estimation Since full kinematic reconstruction of the Higgs boson mass and p T is impossible in this decay channel because of the presence of undetected neutrinos and DM particles, a boosted decision tree (BDT) multivariate classifier has been trained for each of the two signal models to maximize the sensitivity of the search. The BDT exploits a set of kinematic input variables, including m ℓ1 T and m ℓ2 T , the transverse masses built from p miss T and the leading and subleading lepton, respectively, and ∆φ ℓℓ , the azimuthal angle between the directions of the two lepton momenta. For both benchmark models, the BDT training considers processes with two prompt leptons and genuine p miss T (WW, tt, tW, and h → WW production) as the backgrounds. The main background processes arise from top quark production (tt and single top quark production, mainly tW), nonresonant WW events, and nonprompt leptons. The contribution of the nonprompt-lepton background in the SR is determined entirely from data, while the contributions of the top quark, WW, and Z/γ * → τ + τ − backgrounds are estimated using simulated samples. The normalizations of simulated backgrounds are obtained using dedicated CRs that are included in the maximum-likelihood fit used to extract the signal, together with the SR.
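The p miss T,proj and p miss T,mp variables defined above are purely geometric and straightforward to compute; a minimal sketch (illustrative names and toy numbers, not the analysis code) is:

```python
import math

def dphi(a, b):
    """Absolute azimuthal separation in [0, pi]."""
    return abs((a - b + math.pi) % (2.0 * math.pi) - math.pi)

def projected_met(met, met_phi, lepton_phis):
    """p_T^miss projected on the plane transverse to the nearest lepton;
    if that lepton is in the opposite hemisphere (dphi >= pi/2),
    p_T^miss itself is used."""
    min_dphi = min(dphi(met_phi, p) for p in lepton_phis)
    return met * math.sin(min_dphi) if min_dphi < math.pi / 2.0 else met

# p_T^miss,mp: the smaller of the PF-based and track-based projections
pf_proj  = projected_met(45.0, 0.3, [0.5, -2.0])  # from all PF candidates
trk_proj = projected_met(38.0, 0.4, [0.5, -2.0])  # from primary-vertex tracks
met_mp = min(pf_proj, trk_proj)
print(met_mp > 20.0)  # the h -> WW preselection requirement
```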
Smaller backgrounds, WZ and Wγ * , are estimated using simulation after applying a normalization factor estimated in the respective CRs. The WZ CR is defined by requiring the presence of two opposite-sign, same-flavor leptons, compatible with the decay of a Z boson, and one additional lepton of a different flavor, consistent with originating from a W boson decay. In the Wγ * CR, the two leptons produced by the decay of the virtual photon are required to have p T > 8 GeV and to be isolated. Since the two leptons may be close to each other, the isolation is computed without taking into account the contribution of lepton tracks falling in the isolation cone. An additional lepton consistent with originating from the W decay is required. The WZ and Wγ * CRs are not used in the maximum-likelihood fit; instead, the normalization scale factors are extracted and directly applied to the corresponding simulated samples. The remaining backgrounds from diboson and triboson production are estimated directly from simulation. The gg → W + W − and qq → W + W − backgrounds are estimated from simulation normalized as discussed in Section 3. The main feature of these processes is that, as the two W bosons do not originate from a decay of the Higgs boson, their invariant mass does not peak at the Higgs boson mass. For this reason, events in the corresponding CR are required to have a large dilepton invariant mass, achieved by inverting the SR requirement m ℓℓ < 76 GeV. The estimation of the top quark background is performed in two steps. First, a top quark enriched CR is defined to measure a scale factor quantifying the difference in the b tagging efficiencies and mistag rates in data and simulation. This CR is obtained from the SR selection by inverting the b tagged jet veto. In the second step, the scale factor is applied to the corresponding simulated samples with a per-event weight that depends on the number, flavor, and kinematic distributions of the jets. The W + jets production contributes as a background in the h → WW analysis when a jet is misidentified as a lepton. A CR is defined to contain events with one isolated lepton and another lepton candidate that fails the nominal isolation criteria but passes a looser selection. The probability for a jet satisfying this looser selection to pass the nominal one is estimated from data in an independent sample dominated by nonprompt leptons from multijet production. This probability is parameterized as a function of the p T and η of the lepton and applied to the events in the CR. In order to estimate the nonprompt-lepton contamination in the SR, a validation region enriched in nonprompt leptons is defined with the same requirements as the SR, but requiring same-sign eµ pairs. The maximum discrepancy between data and prediction in the validation region, amounting to ≈30%, is taken as the uncertainty in the W + jets background prediction. The Z/γ * → τ + τ − background is estimated from simulation, after reweighting the Z boson p T spectrum to match the distribution measured in data. The normalization of the simulated sample is estimated from data using events in the m h T < 40 GeV region. A normalization factor is then extracted from this region and applied to the SR.
The main difference between the present analysis and the measurement of the SM Higgs boson properties in the same channel [85] is in the signal extraction method: the latter analysis uses a multidimensional fit to the m h T , m ℓℓ , and subleading-lepton p T (p ℓ2 T ) distributions, whereas a fit to the BDT discriminant distribution is used in the present analysis. The h(→ ZZ) + p miss T channel The signal event topology is defined by the presence of four charged leptons (4e, 4µ, or 2e2µ) and significant p miss T produced by the undetected DM particles. The events are selected online with triggers requiring the presence of two isolated leptons (ee, µµ, or eµ), with asymmetric p T thresholds of 23 (17) GeV on the leading and 12 (8) GeV on the subleading electron (muon). Dilepton triggers account for most of the signal efficiency in all three final states. In order to maximize the signal acceptance, trilepton triggers with lower p T thresholds and no isolation requirements are added, as well as single-electron and single-muon triggers with isolated lepton p T thresholds of 27 and 22 GeV, respectively [42]. The reconstruction and selection of the Higgs boson candidates proceeds by first selecting two Z boson candidates, defined as pairs of opposite-sign, same-flavor leptons (e + e − , µ + µ − ) passing the selection criteria and satisfying 12 < m ℓℓ(γ) < 120 GeV, where the Z boson candidate mass m ℓℓ(γ) includes the contribution of photons identified as coming from final-state radiation [42]. The ZZ candidates are then defined as pairs of Z boson candidates not sharing any of the leptons. The Z candidate with the reconstructed mass closest to the nominal Z boson mass [86] is denoted as Z 1 , and the other one is denoted as Z 2 . All the leptons used to select the Z 1 and Z 2 candidates must be separated by ∆R(ℓ i , ℓ j ) > 0.02. The leading (subleading) of the four leptons must have p T > 20 (10) GeV, and the Z 1 candidate must have a reconstructed mass m Z 1 above 40 GeV. In the 4e and 4µ channels, if an alternative Z i Z j candidate based on the same four leptons is found, the event is discarded if m Z i is closer to the nominal Z boson mass than m Z 1 . This requirement rejects events with an on-shell Z boson produced in association with a low-mass dilepton resonance. In order to suppress the contribution of QCD production of low-mass dilepton resonances, all four opposite-sign pairs that can be built with the four leptons (regardless of the lepton flavor) must satisfy m ℓℓ > 4 GeV, and the four-lepton invariant mass must satisfy m 4ℓ > 70 GeV. If more than one ZZ candidate passes the selection, the one with the highest value of the scalar p T sum of the four leptons is chosen. The above requirements define the event preselection. The m 4ℓ distribution for selected ZZ candidates exhibits a peak around 125 GeV, as expected for both SM Higgs boson production and the signal. However, because of the much lower cross section, the potential signal is overwhelmed by the background after the SM Higgs boson selection, as shown in Fig. 4 (left). The distribution of p miss T for selected ZZ candidates is shown in Fig. 4 (right).
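The Z 1 /Z 2 pairing logic just described can be summarized in a few lines of code. The sketch below is deliberately simplified: it ignores FSR photon recovery, the alternative-pairing veto, and the p T -sum ranking among multiple ZZ candidates, and it returns the first disjoint pairing it finds; the lepton representation is a hypothetical toy format.

```python
import itertools, math

M_Z = 91.19  # nominal Z boson mass [GeV]

def lep(pt, phi, flavor, charge):
    """Toy massless lepton at eta = 0 (illustrative representation)."""
    return {"E": pt, "px": pt * math.cos(phi), "py": pt * math.sin(phi),
            "pz": 0.0, "flavor": flavor, "charge": charge}

def mass(l1, l2):
    """Dilepton invariant mass from the summed four-vectors."""
    E, px, py, pz = (l1[k] + l2[k] for k in ("E", "px", "py", "pz"))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def build_zz(leptons):
    """Pick Z1 (closest to M_Z) and Z2 from opposite-sign, same-flavor
    pairs with 12 < m < 120 GeV that do not share leptons."""
    z_cands = []
    for l1, l2 in itertools.combinations(leptons, 2):
        if l1["flavor"] == l2["flavor"] and l1["charge"] * l2["charge"] < 0:
            m = mass(l1, l2)
            if 12.0 < m < 120.0:
                z_cands.append((m, (l1, l2)))
    for (m1, z1), (m2, z2) in itertools.combinations(z_cands, 2):
        if {id(l) for l in z1} & {id(l) for l in z2}:
            continue  # ZZ candidates must not share leptons
        return (z1, z2) if abs(m1 - M_Z) < abs(m2 - M_Z) else (z2, z1)
    return None

event = [lep(60, 0.0, "mu", +1), lep(50, 2.6, "mu", -1),
         lep(20, 1.0, "mu", +1), lep(15, -2.0, "mu", -1)]
z1, z2 = build_zz(event)
```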
After the preselection, the remaining background comes from SM Higgs boson (mostly Vh), tt+V, and VV/VVV production. Another background, dominated by Z+jets production ("Z+X") [42], arises from secondary leptons misidentified as prompt because of the decay of heavy-flavor hadrons and light mesons within jets, and, in the case of electrons, from photon conversions or charged hadrons overlapping with photons from π 0 → γγ decays. The nonprompt-lepton background also contains smaller contributions from tt+jets, Zγ+jets, WZ+jets, and WW+jets events, with a jet misidentified as a prompt lepton. These backgrounds do not exhibit a peak in the m 4ℓ distribution, and are reduced by applying a selection on m 4ℓ around the Higgs boson mass (115 < m 4ℓ < 135 GeV), by rejecting events with more than four leptons, and by requiring the number of b tagged jets in the event to be less than two. Background estimation The dominant irreducible backgrounds from SM Higgs boson and nonresonant ZZ production are determined from simulation, while the Z+X background is determined from data [42]. All other backgrounds are determined from simulation. Background contributions from SM Higgs boson production in association with a Z boson or a tt pair, followed by the h → WW → 2ℓ2ν decay, have been studied with simulated events and found to be negligible. The Z+X background is estimated from data by first determining the lepton misidentification probability in a dedicated CR and then using it to derive the background contribution in the SR. The lepton misidentification probability is defined as the probability that a lepton passing a loose selection with relaxed identification or isolation criteria also passes the tight selection criteria. The misidentification probability is measured in a Z+lepton CR where the Z boson candidate (with a mass within 7 GeV of the nominal Z boson mass) is formed from two selected leptons passing the tight identification criteria, and an additional lepton is required to pass the loose selection. This sample is dominated by Z+nonprompt-lepton events. The electron and muon misidentification probabilities are measured as functions of the lepton candidate p T , its location in the barrel or endcap region of the ECAL or the muon system, and the p miss T in the event, using Z(→ ℓℓ)+e and Z(→ ℓℓ)+µ events, respectively, in the Z+lepton CR. The misidentification probabilities are found to be independent of the charge of the lepton within the uncertainties. The strategy for applying the lepton misidentification probabilities relies on two additional CRs. The first CR is defined by requiring that the two leptons that do not form the Z 1 candidate pass only the loose, but not the tight, identification criteria. This CR defines the "2 pass + 2 fail" (2P2F) sample and is expected to be populated by events that intrinsically have only two prompt leptons (mostly from DY production, with a small contribution from tt and Zγ events). The second CR is defined by requiring only one of the four leptons to fail the tight identification and isolation criteria, and defines the "3 pass + 1 fail" (3P1F) sample, which is expected to be populated by the type of events that populate the 2P2F CR, but with different relative proportions, as well as by WZ+jets events with three prompt leptons.
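The measured misidentification probabilities are turned into SR predictions through transfer factors of the form f/(1 − f) applied to the failing leptons in the 3P1F and 2P2F samples. The sketch below shows one common way such an extrapolation is written, with the 2P2F term correcting the 3P1F one for events with two nonprompt leptons; the exact combination used in the analysis differs in detail, so this is illustrative only.

```python
def transfer(f):
    """Transfer factor f / (1 - f): converts a lepton failing the tight
    selection into an estimate of leptons passing it."""
    return f / (1.0 - f)

def zx_estimate(events_3p1f, events_2p2f):
    """Schematic Z+X prediction: weight the failing lepton(s) in each CR
    event by f/(1-f); the 2P2F term corrects the 3P1F estimate for events
    that contain two nonprompt leptons (illustrative combination only)."""
    n_3p1f = sum(transfer(ev["f4"]) for ev in events_3p1f)
    n_2p2f = sum(transfer(ev["f3"]) * transfer(ev["f4"])
                 for ev in events_2p2f)
    return n_3p1f - n_2p2f

# Toy CR events carrying the per-lepton misidentification probabilities
print(zx_estimate([{"f4": 0.05}] * 200, [{"f3": 0.06, "f4": 0.05}] * 400))
```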
Statistical combination of the search channels The analyses in the five channels described above are almost completely statistically independent of each other, allowing these analyses to be combined without accounting for the possibility of events being selected in more than one final state. Whenever an explicit veto ensuring the strict mutual exclusivity of the channels is not placed in a particular analysis, it was checked that there are no overlapping events with the other channels. The summary of the vetoes on additional objects, namely electrons, muons, τ leptons, photons, jets, and b tagged jets, in each analysis is presented in Table 2. (Table 2: Summary of the maximum number of additional objects allowed in an event for each analysis. A dash means that no restriction on the corresponding object is applied in the corresponding analysis.) These selections not only reduce the major backgrounds, but also ensure the nearly complete mutual exclusivity of the analyses considered for the combination. The overlap in the SRs is zero, and in the CRs it is less than 0.01%, i.e., much smaller than the systematic uncertainties in the analysis. For the Z′-2HDM, the two parameters that we scan are m Z′ and m A . All five analyses contribute to the combination in the ranges 800 < m Z′ < 2500 GeV and 300 < m A < 800 GeV. For m Z′ < 800 GeV, it is not possible to perform the h → bb analysis efficiently; therefore only the four other decay channels are used for the combination. For m Z′ > 2500 GeV and m A > 800 GeV, the signal selection efficiency is significant only for the h → bb decay mode, hence only the h → bb channel contributes in this region. For the baryonic Z′ model, the two parameters that we scan are m Z′ and m χ , and all five analyses are performed in the full phase space considered for the combination. Since the maximum sensitivity for all the analyses is achieved for m χ = 1 GeV, the comparison of individual analyses is shown only for this DM particle mass, to demonstrate the improvement in sensitivity achieved in the combination of individual channels. Systematic uncertainties A number of systematic uncertainties are considered in the combination, broadly divided into two categories: theoretical and experimental. Theoretical uncertainties are considered fully correlated among all five channels. Only the systematic uncertainties attributed to experimental sources that are correlated between different channels are described for the combined result in Section 7.3. The details of all experimental systematic uncertainties in the h → bb analysis using AK8 jets are described in Ref. [30] and those for the analysis using CA15 jets are described in Ref. [31]; for the h → γγ and h → ττ channels they are given in Ref. [32]; and for the h → WW and h → ZZ analyses they are discussed in this section. The h(→ WW) + p miss T channel The normalization and the kinematic shapes of the BDT discriminant distributions for the main backgrounds are derived from data CRs, and therefore systematic uncertainties in both the normalization and the shapes are considered. For the nonprompt-lepton background the uncertainty amounts to approximately 30%, and covers the uncertainty in the lepton misidentification rate, the dependence on the CR background composition, and the statistical component due to the finite event count in the CR. The top quark background CR is included as an additional category in the signal extraction fit. The kinematic shapes of the top quark background are taken from simulation corrected for the b tagging scale factors, with the uncertainties covering the difference between the b tagging efficiency in data and simulation [82]. A similar procedure is applied for the DY background, by defining a CR in the low-m h T phase space, and for the nonresonant WW background, for which a high-m ℓℓ CR is defined. Experimental uncertainties are estimated by applying scale factors between data and simulation, and/or by smearing certain kinematic variables in simulation, with the corresponding changes further propagated to all analysis variables. The signal acceptance uncertainty associated with the combination of single-lepton and dilepton triggers is measured to be 2%.
The uncertainty in the ratio between the single top quark and top quark pair production cross sections, 8% at 13 TeV [87], has also been included, as it affects the top quark background yield from the maximum-likelihood fit used to extract the signal and dominant backgrounds. The uncertainty in the p T spectrum of the top quark has been applied to all the observables in order to cover the difference between the simulated and observed spectra [88], and is of the order of 1%. The uncertainty in the Higgs boson branching fraction for the h → WW decay is about 1% [67]. The uncertainty in the NNLO K factor applied to the LO gg → WW cross section estimate is 15% [89]. The p WW T spectrum in the qq → WW sample has been reweighted to match the resummed calculation [56,57]. The associated shape uncertainties related to the missing higher-order corrections are modeled by varying the factorization, renormalization, and resummation scales up and down independently by a factor of 2 from their nominal values [56]. Finally, uncertainties arising from the limited size of the simulated samples are included for each bin of the BDT discriminant distributions, in each category. The main sources of uncertainty affecting the analysis are listed in Table 3. The h(→ ZZ) + p miss T channel A source of systematic uncertainty in the nonprompt-lepton background estimate potentially arises from the difference in the composition of the SM background processes with nonprompt leptons (Z+jets, tt, Zγ+jets) contributing to the CRs where the lepton misidentification rate is measured and applied. This uncertainty can be estimated by measuring the misidentification rates in simulation for the 2P2F and 3P1F CRs. Half of the difference between the misidentification rates obtained from simulation in these two CRs is used as a measure of the systematic uncertainty in the lepton misidentification rate; it is further propagated to the uncertainty in the nonprompt-lepton background, and amounts to 43% for the 4e, 36% for the 4µ, and 40% for the 2e2µ final states. The uncertainty in the full signal selection efficiency is at the level of 1%. The uncertainty in the m 4ℓ resolution from the uncertainty in the per-lepton energy resolution is about 20% [42] and affects the signal and all the backgrounds from Higgs boson production. In addition, there are two types of systematic uncertainties related to the modeling of p miss T . The first is related to the approximately Gaussian core of the resolution function for correctly measured jets and other physics objects, and corresponds to the uncertainty in the genuine p miss T . The second, attributed to significant mismeasurement of p miss T , is an uncertainty in the "mismeasured" p miss T . The uncertainties from the modeling of genuine p miss T are measured by varying the parameters associated with the corrections applied to p miss T and by propagating those variations to the p miss T calculation, after applying the full analysis selection. Each correction is varied up and down by one standard deviation of the input distribution. The corrections used in this calculation come from the JES, JER, muon, electron, photon, and unclustered energy scales.
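Schematically, each such variation is propagated by shifting the relevant object momenta and recomputing p miss T ; the up/down recomputed values then define the systematic variation. A toy sketch for a JES shift follows (the 2% shift and all inputs are illustrative, not the analysis values):

```python
import numpy as np

def met_after_jes(jets_pt, jets_phi, other_px, other_py, jes_shift=0.0):
    """Recompute p_T^miss after scaling every jet pT by (1 + jes_shift).
    p_T^miss is the magnitude of the negative vector sum of all visible
    transverse momenta (jets plus everything else, lumped here into
    other_px/other_py); |negative sum| equals |sum|."""
    pt = jets_pt * (1.0 + jes_shift)
    sum_px = np.sum(pt * np.cos(jets_phi)) + other_px
    sum_py = np.sum(pt * np.sin(jets_phi)) + other_py
    return float(np.hypot(sum_px, sum_py))

jets_pt, jets_phi = np.array([80.0, 45.0]), np.array([0.3, 2.9])
nominal = met_after_jes(jets_pt, jets_phi, 10.0, -5.0)
up   = met_after_jes(jets_pt, jets_phi, 10.0, -5.0, +0.02)  # +1 sigma JES
down = met_after_jes(jets_pt, jets_phi, 10.0, -5.0, -0.02)  # -1 sigma JES
print(nominal, up, down)  # the shifted values define the shape variation
```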
The uncertainty in the mismeasured p miss T is obtained from a sample with significant contributions from misidentified leptons and mismeasured jets, selected by requiring an opposite-sign, same-flavor dilepton pair passing the Z 1 candidate selection and an additional same-sign, same-flavor pair ("OS+SS" sample). This sample is enriched in misidentified leptons that form the same-sign pair and is expected to lead to significant mismeasurement of p miss T , not already covered by the uncertainties in the Gaussian core discussed above. We derive the mismeasured p miss T uncertainty from the comparison of the p miss T shapes in the "OS+SS" sample and in the SR, with the requirement that m 4ℓ be outside the Higgs boson invariant mass peak (|m 4ℓ − 125 GeV| > 10 GeV). The uncertainty in the mismeasured p miss T is applied to the Z+X sample only, since the effect is expected to be negligible when four genuine leptons are produced, as is the case for the signal and for most of the simulated background samples. An uncertainty of 10% in the K factor used for the gg → ZZ prediction is applied [89]. A systematic uncertainty of 2% in the h → ZZ → 4ℓ branching fraction [67] affects both the signal and the SM Higgs boson background yields. Theoretical uncertainties in the tt+V background cross sections are taken from Ref. [90]. A summary of the experimental uncertainties is given in Table 4. Systematic uncertainties in the combination The uncertainties associated with the background normalization and fit parameters are assumed to be uncorrelated, whereas those associated with the standard object selection are considered fully correlated and are summarized in Table 5. In all five decay channels, a normalization uncertainty of 2.5% for simulated samples is used to account for the uncertainty in the measurement of the integrated luminosity [91]. Also fully correlated across all channels are the systematic uncertainties related to the theoretical calculations of the Higgs boson production cross section, the PDFs, and the renormalization and factorization scales, estimated using the recommendations of the PDF4LHC [92] and LHC Higgs Cross Section [67] working groups, respectively. These uncertainties range from 0.3 to 9.0%. (Table 5: Systematic uncertainties in the combination of channels, along with the type (rate/shape) of uncertainty affecting signal and background processes, correlated amongst at least two final states. For the rate uncertainties, the percentage of the prior value is quoted, while for shape uncertainties an estimate of the impact of systematic uncertainties on the yield is also listed. A dash ("-") implies that a given uncertainty does not affect the analysis. Whenever an uncertainty is present but kept uncorrelated in a particular channel, this is mentioned explicitly. The effect of the b jet mistag rate uncertainty is very small in the h → bb Z′-2HDM analysis, and hence it is added in quadrature to the effect of the b tagging efficiency uncertainty.) Uncertainties from imprecise knowledge of the JES are evaluated by propagating the uncertainties in the JES for individual jets in an event, which depend on the jet p T and η, to all the analysis quantities. The uncertainties in the selection of b tagged AK4 jets are taken into account using the uncertainties in the b tagging efficiency and misidentification rate estimated from the difference between data and simulation [82]. The uncertainty due to the difference in the performance of electron, muon, and τ lepton identification between data and simulation is taken into account for individual decay channels and considered fully correlated in the statistical combination. An uncertainty of 1-3% in the electron energy scale and an uncertainty of 0.4-1.0% in the muon energy scale are considered to be correlated in the combination.
Results The event selection described in Section 5 has been used to discriminate the mono-Higgs signal from backgrounds in each channel. The observed yields in data and the expected event yields for the signal and background processes in the h → bb, h → γγ, and h → ττ channels can be found in Refs. [30][31][32]. The corresponding yields for the h → WW and h → ZZ analyses are discussed in Section 8.1. Tables 6 and 7 and Figs. 5 and 6 show one signal mass hypothesis for each model, normalized to the respective cross section. For the Z′-2HDM, the signal is normalized to the cross section calculated for Z′ and A boson masses of 1200 and 300 GeV, respectively, and for g Z′ = 0.8, tan β = 1. For the baryonic Z′ model, the signal is normalized to the cross section corresponding to Z′ and DM particle masses of 500 and 1000 GeV, respectively, and for g χ = 1, g q = 0.25. The expected background yields and the observed number of events in data, along with the expected yields for two signal benchmarks in the h → WW and h → ZZ channels, are summarized in Tables 6 and 7, respectively. Figure 5 shows the BDT discriminant distribution for the expected backgrounds and the observed events in data for the h → WW analysis. Benchmark signal contributions in the Z′-2HDM (left) and baryonic Z′ (right) model are also shown, scaled by factors of 500 and 100, respectively, for better visibility. Figure 6 shows the p miss T distribution of the expected backgrounds and the observed events in data for the h → ZZ analysis. Benchmark signal contributions are also shown. For both analyses, the total uncertainty, given by the quadratic sum of the statistical and systematic components, is shown. The bottom panels show the ratios of data to the total background prediction with their total uncertainties. The potential signal is extracted from a fit to the BDT discriminant (p miss T ) spectrum with a signal-plus-background hypothesis for the h → WW (h → ZZ) channel. The profile likelihood ratio is used as a test statistic, in an asymptotic approximation [93]. The data agree well with the expected background, and no signal is observed in either channel. Limits on the model parameters at 95% confidence level (CL) are set using the modified frequentist CL s criterion [94][95][96] with all the nuisance parameters profiled. The observed and expected upper limits on the DM candidate production cross section are shown in Fig. 7 for the h → WW (upper) and h → ZZ (lower) channels, for the Z′-2HDM with m A = 300 GeV (left) and for the baryonic Z′ model with the value of m χ fixed at 1 GeV (right). All other model parameters are fixed to the values described in Section 1. The upper limits for the h → ZZ analysis already include the statistical combination of all three final states used. The h → WW analysis excludes the region of m Z′ from 780 to 830 GeV for m A = 300 GeV in the Z′-2HDM.
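For orientation, the CL s construction referenced above can be illustrated with a toy single-bin counting experiment using pseudo-experiments and no nuisance parameters; the actual analyses use binned profile-likelihood fits in the asymptotic approximation, so the sketch below is only conceptual and all numbers are invented.

```python
import numpy as np
rng = np.random.default_rng(1)

def q_mu(n, mu, s, b):
    """Likelihood-ratio test statistic for one Poisson bin, no nuisances:
    2 [nll(mu) - nll(mu_hat)], with 0 <= mu_hat <= mu for upper limits."""
    mu_hat = min(max((n - b) / s, 0.0), mu)
    nll = lambda m: (m * s + b) - n * np.log(m * s + b)
    return 2.0 * (nll(mu) - nll(mu_hat))

def cls(n_obs, mu, s, b, ntoys=5000):
    """CLs = CL_{s+b} / CL_b estimated from pseudo-experiments."""
    q_obs = q_mu(n_obs, mu, s, b)
    p_sb = np.mean([q_mu(n, mu, s, b) >= q_obs
                    for n in rng.poisson(mu * s + b, ntoys)])
    p_b = np.mean([q_mu(n, mu, s, b) >= q_obs
                   for n in rng.poisson(b, ntoys)])
    return p_sb / max(p_b, 1e-12)

# Scan the signal strength upward until CLs < 0.05 (95% CL upper limit)
s, b, n_obs = 5.0, 50.0, 48
mu = 0.1
while cls(n_obs, mu, s, b) > 0.05:
    mu += 0.1
print(f"95% CL upper limit on the signal strength: ~{mu:.1f}")
```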
Results of the statistical combination The observed and expected upper limits at 95% CL on the DM production cross section, normalized to the predicted cross section and as a function of m Z′ , from the combination of all five channels are shown in Fig. 8 for the Z′-2HDM with m A = 300 GeV (left) and for the baryonic Z′ model with m χ = 1 GeV (right). For the Z′-2HDM, the combination is dominated by the h → bb analysis for m Z′ > 800 GeV. However, the h → bb analysis has no sensitivity for m Z′ values below 800 GeV, and a combination of the h → γγ and h → ττ channels plays a significant role in this region of the model parameter space. The range of m Z′ excluded at 95% CL spans from 500 to 3200 GeV for m A = 300 GeV. For the baryonic Z′ model, the combination results are also dominated by the h → bb channel, but the h → γγ and h → ττ channels also provide a nonnegligible contribution in constraining the model parameters. The range of m Z′ excluded at 95% CL spans from 100 to 1600 GeV for m χ = 1 GeV. Figure 9 shows the observed and expected 95% CL exclusion contours on σ/σ th in the m Z′ -m A and m Z′ -m χ planes for the Z′-2HDM (left) and the baryonic Z′ (right) model, respectively. (Figure 9: The upper limits at 95% CL on the observed and expected σ/σ th in the m Z′ -m A and m Z′ -m χ planes for the Z′-2HDM (left) and baryonic Z′ model (right), respectively. The region enclosed by the contours is excluded using the combination of the five decay channels of the Higgs boson for the following benchmark scenarios: g Z′ = 0.8, g χ = 1, tan β = 1, m χ = 100 GeV, and m A = m H = m H ± for the Z′-2HDM, and g χ = 1, g q = 0.25 for the baryonic Z′ model.) The results for the Z′-2HDM are also interpreted in the m Z′ -tan β plane for three different m A values: 300, 400, and 600 GeV. Since the shape of the p miss T distribution does not change with tan β, which affects only the product of the Z′ production cross section and the branching fraction to the mono-Higgs channel, the limit shown in Fig. 9 (left) can simply be rescaled for different values of tan β, from 0.5 to 10. These limits, in the m Z′ -tan β plane, are shown in Fig. 10. The results for the baryonic Z′ model are further interpreted in an s-channel simplified DM model with a vector mediator, which also couples to the SM quarks. A point in the parameter space of this model is determined by four variables: the DM particle mass m χ , the mediator mass m med , the mediator-DM coupling g χ , and the universal mediator-quark coupling g q . The couplings for the present analysis are fixed to g χ = 1.0 and g q = 0.25, following the recommendation of Ref. [37]. The results are interpreted in terms of 90% CL limits on the spin-independent (SI) cross section σ SI for DM-nucleon scattering. The value of σ SI for a given set of parameters in the s-channel simplified DM model is given by [37]: σ SI = f 2 (g q ) g 2 χ µ 2 nDM /(π m 4 med ), where µ nDM is the reduced mass of the DM-nucleon system and f(g q ) is the mediator-nucleon coupling, which depends on g q . The resulting σ SI limits, as a function of m χ , are shown in Fig. 11. (Figure 11: The upper limits at 90% CL on the DM-nucleon spin-independent scattering cross section σ SI , as a function of m χ . Results obtained in this analysis are compared with those from the CMS dijet analyses [39, 41] and from several direct-detection experiments: CRESST-II [97], CDMSLite [98], PandaX-II [99], LUX [100], XENON-1T [101], and CDEX-10 [102].) Results obtained in this analysis are compared with those from the CMS dijet analyses [39, 41] and from several direct-detection experiments. For the chosen set of parameters, the cross section limit from the present analysis is more stringent than the direct-detection limits for m χ between 1 and 5 GeV.
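A direct numeric translation of the σ SI formula above is sketched below. Masses are in GeV, and the conversion from GeV −2 to cm 2 uses (ħc) 2 ≈ 3.894 × 10 −28 GeV 2 cm 2 . The choice f(g q ) = 3g q (a vector mediator with a universal quark coupling counts the three valence quarks) is our assumption; with g q = 0.25, g χ = 1, and m med = 1 TeV, the result equals the commonly quoted ≈6.9 × 10 −41 cm 2 scaled by (µ nDM /1 GeV) 2 .

```python
import math

HBARC2_CM2 = 3.894e-28  # (hbar c)^2 in GeV^2 cm^2: converts GeV^-2 to cm^2

def sigma_si(m_chi, m_med, g_chi, g_q, m_n=0.939):
    """sigma_SI = f(gq)^2 g_chi^2 mu_nDM^2 / (pi m_med^4), masses in GeV.
    f(gq) = 3*gq is assumed (vector mediator, universal quark coupling)."""
    f_gq = 3.0 * g_q
    mu = m_chi * m_n / (m_chi + m_n)  # DM-nucleon reduced mass
    return f_gq**2 * g_chi**2 * mu**2 / (math.pi * m_med**4) * HBARC2_CM2

# Benchmark point: g_q = 0.25, g_chi = 1, m_med = 1 TeV; here mu ~ 0.94 GeV,
# so the result is ~6.9e-41 cm^2 x (mu / 1 GeV)^2 ~ 6e-41 cm^2
print(sigma_si(m_chi=1000.0, m_med=1000.0, g_chi=1.0, g_q=0.25))
```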
Summary A search for dark matter particles produced in association with a Higgs boson has been presented, using a sample of proton-proton collision data at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb −1 . Results from five decay channels of the Higgs boson, h → bb, h → γγ, h → τ + τ − , h → W + W − , and h → ZZ, are described, along with their statistical combination. No significant deviation from the standard model prediction is observed in any of the channels or in their combination. Upper limits at 95% confidence level on the production cross section of dark matter are set in a type-II two Higgs doublet model extended by a Z′ boson and in a baryonic Z′ model. The results in the baryonic Z′ model are also interpreted in terms of the spin-independent dark matter-nucleon scattering cross section. This is the first search for DM particles produced in association with a Higgs boson decaying to a pair of W or Z bosons, and the first statistical combination based on five Higgs boson decay channels. Acknowledgments We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses.
Journal of Materials Chemistry B. Enhanced mechanical performance and wettability of PHBV fiber blends with evening primrose oil for skin patches improving hydration and comfort. The growing prevalence of skin diseases due to allergies, often causing atopic dermatitis, which is characterized by itching, burning, and redness, constantly motivates researchers to look for solutions that soothe these effects by properly moisturizing the skin. For this purpose, we combined poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) electrospun fibers with evening primrose oil (EPO) into a system of patches to ensure skin hydration. Moreover, dressing or patch application requires appropriate stretchability and wettability of the electrospun material. Thus, we examined the mechanical properties of the PHBV blend with EPO, as well as changes in the wettability of the fiber surface depending on the share of the EPO additive in the blend. The effectiveness of the patches has been characterized using the water vapor transmission rate as well as the skin moisturizing index. The thermal insulation effect of the patches on human skin has been verified as well. The patches made by combining the polymer with natural oil showed enhanced mechanical properties and increased skin hydration, indicating the potential applicability of PHBV-based patches. The presented PHBV patches with EPO are a prospective alternative treatment for patients for whom current state-of-the-art methods do not bring satisfactory results. Introduction The skin is the largest organ in humans, and its three layers collectively and individually work to protect all internal tissues and vital organs against daily environmental challenges. The skin also plays an important role in controlling water loss and regulating body temperature. Atopic dermatitis is associated with deficiencies in macro- and micronutrients. 1 Evening primrose oil (EPO) is approved in many countries as a supportive treatment of atopic dermatitis. 2 It has been reported to restore defective epidermal barriers, reduce excessive trans-epidermal water loss, and improve skin smoothness. 3 EPO is a rich source of gamma-linolenic acid (GLA), a substance that can be of great importance in the treatment of eczema. A deficiency of essential fatty acids in the skin is one of the factors of eczema. 4,5 EPO is extremely high in linoleic acid (LA) (70-74%) and gamma-linolenic acid (GLA) (8-10%), which can contribute to the proper functioning of human tissue because they are precursors of anti-inflammatory eicosanoids. 6 Moreover, GLA has antibacterial properties, especially with respect to Staphylococcus aureus, which is a very common problem in patients with atopic dermatitis. 2 Essential fatty acids (EFAs) are necessary for the proper functioning of the body. Linoleic acid belongs to the group of essential fatty acids. EFAs cannot be produced endogenously and should therefore be obtained exogenously from food sources. Evening primrose oil is a good source of omega-6 fatty acids, which are EFAs. 6,7 For skin diseases, it is extremely important to relieve patients' pain caused by dry skin and itching. Wet dressings are commonly used to provide relief and protect against transepidermal water loss. [8][9][10] In general, multifunctional skin patches should be soft, flexible, stress-resistant, and durable, but easy to remove from the skin. 11 There are many reports in the literature on the use of electrospun fibers for skin dressings.
[11][12][13][14][15][16][17][18][19] The high surface area of electrospun fibers affords enormous possibilities not only for drug delivery but also for multifunctional bandages. [20][21][22] Directly blending biocompatible polymers with natural oils to enhance oil delivery for skin moisturization helps in maintaining problematic skin, for example in eczema. 23,24 Poly(3-hydroxybutyrate-co-3-hydroxyvalerate) is widely used as a biomaterial due to its high biocompatibility and biodegradability, but also its high stability over time. [25][26][27][28][29] As mentioned earlier, dressings used on the skin should have adequate stretchability, so as not to cause discomfort, and be adequately durable. Unfortunately, PHBV-based materials have weaker mechanical properties compared to other biodegradable materials, e.g. PLA. 30,31 Therefore, it is extremely important to improve the mechanical properties of PHBV, in particular to increase its flexibility and extensibility, which are crucial features for dressings. Fibrous PHBV membranes are also hydrophobic, 28,32,33 so improving their wettability extends the application of PHBV as bandages. In this study, we aim to increase the hydration of dry skin affected by atopic dermatitis by delivering oil to the skin for 6 hours using electrospun PHBV patches with the desired mechanical properties. By moisturizing the skin, we want to provide comfort to the patient and reduce the risk of the skin barrier being broken down by germs or allergens, bringing relief, for instance by calming itching or reducing the flare-ups that often occur in eczema. We developed a blend based on PHBV polymer and EPO to obtain electrospun membranes. We characterized the morphology, chemical composition, and thermal and mechanical properties of these materials. Moreover, we investigated the wettability of the modified fibers, the water vapor and heat permeability, and the change in skin hydration in volunteers using the designed blend and PHBV patches. Previous studies indicated that the addition of oil in blend electrospinning reduces the wetting contact angle of the electrospun membrane. 34 Importantly, our research has shown that by mixing natural oil with a biodegradable polymer, we are able to produce electrospun fibers for skin dressing or patch applications. In addition, when oil is added, the mechanical properties are significantly improved, and soaking of the PHBV-based membranes with additional oil for skin delivery is enhanced. The mechanical properties of blends are often correlated with structural changes of electrospun materials; therefore we also performed calorimetric studies on our electrospun membranes. We clearly demonstrate the advantages of blending PHBV with EPO for dressing applications, increasing skin moisture and hydration. Experimental Electrospinning PHBV fibers were prepared by electrospinning an 8 wt% PHBV solution in chloroform and N,N-dimethylformamide (DMF) (volume ratio 9 : 1, Sigma Aldrich, UK) mixed for 4 h. Evening primrose oil (Oenothera biennis, OlVita, Poland) was mixed with the polymer solution (in proportions of 0.5 g and 1.0 g EPO) for 30 min just before electrospinning of PHBV, which was carried out at a temperature of T = 20 °C and humidity of RH = 40%. The positive voltage polarity (+17 kV) was applied to the stainless steel needle with an outer diameter of 0.8 mm and an inner diameter of 0.5 mm, which was kept at a distance of 20 cm from the grounded collector.
The polymer solution flow rate was 0.1 ml min⁻¹.

Scanning electron microscopy

The morphology of the PHBV fibers was evaluated using scanning electron microscopy (SEM, Merlin Gemini II, Zeiss, Germany) at an accelerating voltage of U = 1-3 kV and a working distance of 7-10 mm. All samples were coated with a 15 nm gold layer using a rotary pump sputter coater (Q150RS, Quorum Technologies, UK) before imaging. Fiber diameters (D_f) were measured from SEM micrographs using ImageJ (v. 1.53c, USA), with 100 measurements per sample compiled into histograms with standard deviations.

Attenuated total reflectance - Fourier transform infrared spectroscopy (ATR-FTIR)

All spectra were recorded using a TENSOR II Bruker spectrophotometer by the ATR technique using a diamond crystal. During measurements, 64 scans were accumulated over the wavenumber range 350-4000 cm⁻¹, with a resolution of 1 cm⁻¹. Reference samples were PHBV fibers and oil, used separately.

Differential scanning calorimetry (DSC)

Thermal characterization was carried out using a differential scanning calorimeter (DSC, Mettler Toledo, Columbus, OH, USA) at a heating rate of 10 °C min⁻¹ in the −25 to 200 °C temperature range. Measurements were carried out in a dynamic nitrogen atmosphere for samples placed in aluminum pans. The enthalpy of melting ΔH_m is the integrated area under the melting peak, and the degree of crystallinity was calculated from the formula X_c = (ΔH_m / ΔH_m100) × 100%, where ΔH_m100 is the enthalpy of 100% crystalline PHBV (146.6 J g⁻¹). [35][36][37][38]

Mechanical testing

The mechanical properties of PHBV fibrous membranes were measured using a tensile module with a 20 N load cell (Kammrath Weiss GmbH, Dortmund, Germany). Fibrous membranes were cut into rectangles (10 × 15 mm) and placed in the clamps. Each type of sample was tested five times. Mechanical tests were performed uniaxially at an extension speed of 15 mm s⁻¹. Maximum stress and strain were calculated from the stress-strain curves. The thickness of the samples was measured from SEM images using ImageJ.

Water contact angle

The wettability of the PHBV-based membranes was determined by measuring the static contact angle. We used 3 μl drops of deionized water (DI water, HLP 5UV purification system - Hydrolab, Straszyn, Poland) at T = 22 °C and RH = 35%. Images of 5 droplets were taken within 15 s of deposition, at 1 s intervals, using a Canon EOS 700D camera with an EF-S 60 mm f/2.8 Macro USM zoom lens. The contact angles were measured using ImageJ; the average values were calculated from 10 measurements with standard deviations.

Zeta potential

The zeta potential measurement allowed us to determine changes in surface charge across the entire volume of the fibrous membranes. The zeta potential (streaming potential) of the PHBV fibrous samples was measured using a SurPASS 3 electrokinetic analyzer (Anton Paar, Austria) with a cylindrical cell. Titration curves were obtained from zeta potential measurements as a function of pH in 0.01 M KCl electrolyte solution. The pH was varied from 2.5 to 9.0 by progressive addition of 0.05 M HCl or 0.05 M NaOH to the solution for the acidic and basic ranges, respectively. PHBV samples were cut out, crushed and put into the measuring cell. Each point is the zeta potential value averaged from 4 measurements; error bars correspond to the standard deviations in both the potential change and the pH change.
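To make the crystallinity calculation from the DSC section concrete, the following is a minimal sketch in Python; it is not code from the paper, and the ΔH_m values used are illustrative placeholders, not measured data. Whether the paper normalizes ΔH_m by the PHBV weight fraction in the blend is also an assumption flagged in the code.

```python
# Sketch of the crystallinity formula Xc = dHm / dHm100 * 100%,
# with dHm100 = 146.6 J/g for 100% crystalline PHBV (as stated above).

DH_M100 = 146.6  # J/g, melting enthalpy of fully crystalline PHBV

def degree_of_crystallinity(dh_m: float, polymer_fraction: float = 1.0) -> float:
    """Return Xc in % from the measured melting enthalpy dh_m (J/g).

    polymer_fraction accounts for the PHBV weight fraction in a blend
    (1.0 for pure PHBV fibers); normalizing by it is an assumption here.
    """
    return dh_m / (polymer_fraction * DH_M100) * 100.0

if __name__ == "__main__":
    # Placeholder enthalpies chosen only to illustrate the calculation.
    for label, dh_m in [("PHBV", 73.3), ("PHBV + 5% EPO", 66.0)]:
        print(f"{label}: Xc = {degree_of_crystallinity(dh_m):.1f}%")
```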
Oil spreading test

Evening primrose oil (10 μl) was pipetted onto a 5 × 5 cm electrospun membrane. After the oil deposition, 7 pictures of the oil-spread areas were taken at 30 min intervals, using a Canon EOS 700D camera with an EF-S 18-55 mm (f/3.5-5.6) zoom lens from the top and a Canon EOS 250D camera with an EF-S 60 mm f/2.8 Macro USM zoom lens from the bottom of the samples. The surface area of the oil spread on the fibers was measured using ImageJ. The mean values and standard deviations were calculated from four replicates per sample.

Water vapor and heat transmission rate

To perform water vapor transmission tests, glass beakers with 5 ml of distilled water were covered and wrapped with the fibrous samples PHBV, PHBV + 5% EPO and PHBV + 10% EPO; gauze was used as the control sample in this WVTR experiment. The beakers were weighed and placed in water heated to 37 °C at RH = 40-45%. After 24 h, the beakers were weighed again and the WVTR coefficient was calculated as WVTR = (m₀ − m₁)/S, where m₀ is the mass before the test [g], m₁ is the mass after 24 h [g] and S is the evaporation area [m²]. Additionally, gauze and fibrous membranes were wetted by pipetting 30 μl of EPO, and the oil was allowed to drain for 5 min. The beakers were weighed before and after 24 h. A series of 3 replicates was performed for each type of sample. Heat transport was tested using a FLIR T560 thermal imaging camera with a FLIR f = 17 mm lens (FLIR Systems, USA). PHBV and blend fibers were applied to the skin of the forearms on both hands. Fibers with an additional 30 μl of EPO were placed on one hand and dry fibers on the other hand. Thermal photos were taken immediately after the oil was dropped and after 3 h of patch application.

Skin test

The testing of the patches on human skin was performed on volunteers following the ethical considerations regarding human testing of cosmetic products according to the EU guidelines of Council Directive (76/768/EEC) and the World Medical Association Declaration of Helsinki (1964-1975-1983-1989-1996). A skin moisture test was performed using rectangular fiber samples (2.5 × 2.5 cm). The patches with 25 μl of evening primrose oil were applied to the skin. The moisture of the skin was measured using a corneometer (Hydro Pen H10, EPRUS) before patch application and after 6 h at the same place. Three types of samples without oil deposition were used as control samples. Skin hydration tests were carried out on 5 volunteers using dry PHBV and EPO-blend patches as controls, and patches additionally soaked with oil.
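As a worked illustration of the WVTR formula reconstructed above, the short Python sketch below computes (m₀ − m₁)/S over the 24 h test period. The masses and the beaker mouth diameter are invented examples for demonstration, not measured values from the study.

```python
import math

# WVTR = (m0 - m1) / S, expressed per 24 h, where m0 and m1 are the
# beaker masses (g) before and after 24 h and S is the evaporation
# area (m^2). All numbers below are illustrative placeholders.

def wvtr(m0_g: float, m1_g: float, area_m2: float) -> float:
    """Water vapor transmission rate in g m^-2 per 24 h."""
    return (m0_g - m1_g) / area_m2

beaker_area = math.pi * (0.015 ** 2)  # e.g. a 3 cm diameter beaker mouth
print(f"WVTR = {wvtr(52.40, 50.61, beaker_area):.0f} g/m^2/24 h")
```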
Morphology and sizes of fibers

The addition of EPO causes changes in the morphology of PHBV fibers, which were observed using SEM. The microstructure of the obtained fibers differs depending on the oil content in the electrospinning blend. PHBV fibers (Fig. 1) have a homogeneous structure, free from pores and distortions, similar to previous results. 28,32 Electrospun fibers with 5% EPO have very small pores (Fig. 1B), but they still have a regular, smooth surface. However, fibers with 10% EPO (Fig. 1C) have a rough surface caused by solvent evaporation. In Fig. 1D, the fiber diameters are presented as a box chart showing the mean fiber diameter, the interquartile interval and standard deviations. The fiber diameter increases with the added percentage of oil in the PHBV polymer solution. The addition of oil caused an increase in the viscosity of the polymer solutions, which slows down the evaporation of solvents during electrospinning. The average diameter was 2.46 ± 0.31 μm for PHBV fibers, 3.80 ± 0.42 μm for fibers with 5% EPO, and 5.92 ± 0.76 μm for fibers with 10% EPO. Forming fibers from a polymer blend with oil thus induced morphological changes in them. In the case of polymer fibers, each change of the electrospinning parameters or modification of the input material changes the structure of the obtained fibers, e.g. in blends with polyethylene oxide, 39,40 gelatin, 41 chitosan, 42 polylactic acid, 43,44 and even with tea or oil extracts. 45

Chemistry analysis of electrospun patches

Chemical analysis of the electrospun patches was performed to confirm the presence of both components and the relation between them. IR spectroscopy analysis of evening primrose oil, PHBV fibers, and PHBV fiber-oil blends showed that these compounds contain bonds characteristic of the structure of fatty acid glycerides, see Fig. 2. The analysis of EPO indicates that the stretching vibration band of the C=O groups was observed at 1740 cm⁻¹, and the band of the C-H bond from the olefin group appeared at 3012 cm⁻¹. 46 The C-H bonds of the -CH₂- groups were detected at 2853 cm⁻¹ (stretching) and at 1461 cm⁻¹ (deformational). Moreover, the C-H bonds of the -CH₃ groups were observed at 2921 cm⁻¹ (stretching) and 1378 cm⁻¹ (deformational). The C-O stretching bands of the ester groups were detected at 1098, 1164, and 1236 cm⁻¹. The absorption peak at 721 cm⁻¹ is associated with rocking vibrations of the -CH₂- groups. 46

In the case of PHBV, there are peaks in the spectrum that indicate whether we are dealing with a crystalline or amorphous structure. The most intense absorption peak for PHBV fibers, at around 1719 cm⁻¹, is associated with the C=O stretching vibration and is characteristic of the crystalline form of PHBV. In this spectrum, we also see a broadened peak at around 1740 cm⁻¹, suggesting a small amount of amorphous PHBV. 40,47 The C-O stretching bands were detected at 1055, 1129, and 1181 cm⁻¹. The presence of the -CH₂- group was indicated by the peak at 2932 cm⁻¹. The absorption bands in the range 827-980 cm⁻¹ are related to C-H stretching. Furthermore, C-H stretching bands were also observed at around 1227, 1279, and 2976 cm⁻¹, and C-H bending vibrations were detected at 1379 and 1453 cm⁻¹. 40,44,[47][48][49] Most bands for EPO and for PHBV fibers in the range of 1100-1453 cm⁻¹ are similar, so interpreting the effect of the oil content in the fibers is challenging (Fig. 2: the ATR-FTIR spectra of EPO, PHBV fibers and blends of PHBV fibers with 5% and 10% EPO). In both the 5% and 10% EPO blends, we can observe the characteristic PHBV bands in the range 826-980 cm⁻¹, at 1056 cm⁻¹, and at 1179 cm⁻¹. The increase in the absorbance intensity of the bands detected at 1100 cm⁻¹, 1379 cm⁻¹ and 1453 cm⁻¹ results from the overlapping and amplification of peaks coming from both PHBV and EPO. However, as a result of the merging of the bands at 1740 cm⁻¹ for EPO and 1719 cm⁻¹ for PHBV fibers, an increase in peak intensity and wider bands on the EPO side of the peaks were observed for both types of blends. In the case of the blends with EPO, the peak at 1720 cm⁻¹ has a higher intensity, which may suggest changes in the crystalline structure in the presence of EPO.
47 Similarly, the peaks at 1275, 1261 and 1226 cm⁻¹, characteristic of the crystal structure, have a greater intensity in the spectra of the blends than in that of pure PHBV. This could suggest that the addition of EPO improves the crystallinity of PHBV. 40,47 Additionally, for fibers with 5% as well as 10% oil, the band of the C-H bond from olefin groups was detected at 3009 cm⁻¹, and stretching vibrations of the C-H bonds of the -CH₂- groups at 2856 cm⁻¹ and of the -CH₃ groups at 2930 cm⁻¹. 40,44,46,47

Thermal analysis of electrospun fibers

DSC curves for the cooling and the second heating cycle for PHBV fibers and polymer-oil blends are shown in Fig. 3A and B, respectively. For pure PHBV fibers, a crystallization peak at 60 °C was found, resulting from the rapid crystallization of PHBV. 30 Compared to the pure PHBV fibers, the crystallization temperature of the oil blends was decreased, which indicates reduced crystallization in the mixed PHBV and EPO fibers (Fig. 3A). Each sample shows an intense endothermic peak between 135 and 155 °C, which is the PHBV melting peak (Fig. 3B). The values of the melting point (T_m) and melting enthalpy (ΔH_m) obtained from the second heating scan, and the degree of crystallinity (X_c), for the individual PHBV samples and oil blends are presented in Table 1. The melting points of the individual samples do not differ significantly from each other, while the melting enthalpy values increase with increasing oil content in the blend. For pure PHBV fibers, a broad melting peak with one maximum can be observed, while for the polymer-oil mixtures, two melting peaks are visible. Generally, a double melting peak is seen in thermal analyses of multi-component materials. 30,31,40,47,50 The first melting peak relates to the melting of the crystals formed during fiber manufacturing, while the second relates to the melting of crystals re-formed during heating. 47,51 Clearly, the addition of EPO to PHBV affects the crystallization process of the tested fibers. The pure PHBV fibers show the highest degree of crystallinity (50%). Moreover, the analysis of the degree of crystallinity of all the samples shows a decrease with increasing oil content in the blend: 45% and 40% for PHBV + 5% EPO and + 10% EPO, respectively. The differences in the crystallinity of the samples are also related to the fiber manufacturing process and the morphology of the samples. 38 The additional proportion of amorphous phase, i.e. the oil added to the polymer solution, reduces the crystallinity of the produced materials. In the case of polymer-oil blends, the solvent evaporation process can be slower and disturbed, which may affect the dynamics of crystallization, and thus the degree of crystallinity of the obtained materials. 37 Faster evaporation freezes the polymer chain alignment, but larger polymer chains can be slower, or need more time, to relax; thus the crystallinity can be lower. 37,52 Moreover, in electrohydrodynamic processes, an increase in viscous drag can decrease the stretching of the polymer solution if the electric field or flow rate is not increased. [53][54][55] The differences between the samples in the characteristic thermal peaks and in the chemical bond bands indicate that the oil is built into the polymer structure. This is confirmed by both the increase in the intensity of the characteristic bands tested with FTIR and the decrease in the crystallization temperature of the tested materials.
However, the fiber morphology shows no oil permeation that would be visible in the SEM micrographs. The structure and size of the fibers did change, as shown in Fig. 1.

Mechanical testing

The mechanical test results showed an increase in the maximum tensile strength of PHBV fibers with increasing EPO content, see Fig. 4. For PHBV fibers it was 0.16 ± 0.03 MPa, and the maximum strength increased to 0.51 ± 0.09 MPa for PHBV + 5% EPO, and to 1.07 ± 0.15 MPa for PHBV + 10% EPO. As the maximum stress increases, the toughness of the membranes also increases, from 0.016 MJ m⁻³ for pure fibers up to 0.288 MJ m⁻³ for PHBV with 10% oil. Moreover, the oil blends show significantly higher strain values at maximum stress: 3%, 15% and 25% for PHBV, 5% EPO and 10% EPO, respectively. The toughness of the material also improved with increasing EPO addition in the blend. The strain at failure for PHBV fibers and 5% EPO fibers is similar; however, the shapes of the characteristic stress-strain curves are completely different. Pure PHBV fibers quickly reach the maximum stress value, and further stretching of the sample causes slow elongation. In the case of the samples with oil, in the first stage of stretching we reach 90% of the maximum stress, then the uniaxially stretched fibers elongate to the maximum, and then the samples break very quickly. Fig. 4 shows example stress-strain curves for the samples; the average mechanical properties are presented in Table 1. The tensile curves of all tested samples are presented in the ESI,† Fig. S2. The different shapes of the curves resulting from the uniaxial stretching of electrospun fibers can be caused by the chemical composition of the polymer solution as well as by physical strengthening of the fibers. 24,40,56,57 Overall, PHB is a brittle material with low toughness. Increased flexibility was achieved by copolymerizing the PHB homopolymer with the HV monomer. 58 The HV monomer is more complex than HB, having a longer side chain, and the mechanical properties of polymers depend on the polydispersity of the materials. 56 In addition, an important aspect of blends is the strength and type of interfacial bonding in the polymer. 30,59 The binding of the oil to the polymer solution, seen especially in the stretching vibrations of the C-H bonds (2800-3000 cm⁻¹), can increase the polydispersity of the material, and thus improve the mechanical and thermal properties, 60 including toughness. On the other hand, as a result of the addition of oil to the PHBV solution, the crystallinity of the tested materials decreases, but the diameter of the fibers increases, see Fig. 1. The properties of electrospun polymer membranes are determined by the packing density of the fibers in the mesh, the interactions between fibers, and the mechanical strength of the individual fibers. 40,57,61,62 As a result of uniaxial stretching, the polymer fibers are oriented in the direction of the applied force, which manifests as an increase in stress in the sample. 63 The higher the fiber packing density, the higher the mechanical strength required to orient the fibers in the membrane. 57 The increase in the diameter of the fibers with the addition of oil also contributed to the improvement of their mechanical properties.
64,65 Considering the effect of interactions between fibers in the meshes during tensile testing, the addition of oil increased the number of contact points between fibers, as the diameter of the fibers with EPO is larger, see Fig. 1. This demonstrates improved adhesion between the load-bearing PHBV + EPO fibers. 66,67 Interactions between fibers, such as sliding, enhance the toughness of electrospun fibers, 68 which is clearly presented in the stress-strain characteristics in Fig. 4 (representative stress-strain curves from the tensile testing of pure PHBV fibers and PHBV fibers with 5% and 10% EPO). Characteristic bonds, especially the bands around 1720 cm⁻¹, confirm the presence of oil in the polymer blend. Moreover, the addition of oil influenced the crystallization process and the melting enthalpy of the modified materials. The chemical and morphological changes in the structure of the polymer fibers are reflected in the mechanical properties, resulting in an improvement in these properties compared to those of pure PHBV fibers.

Wettability - water contact angle

The wettability of PHBV nonwovens with the addition of EPO and of pure PHBV fibers was assessed from the water contact angles (Fig. 5A-D). In the case of PHBV fibers, no significant changes in the contact angle were noticed within 15 s of the measurement. The mean value for the PHBV fibers was around 126.1° ± 1.9°. The measured water contact angles for pure PHBV fibers were close to the values measured in previous work. 28,32 Moreover, the water contact angles for pure PHBV fibers remained the same for at least 30 min, so we can conclude that the hydrophobic stability is maintained for a long time. An interesting phenomenon was noticed when testing the contact angle on fibers made of PHBV and EPO blends. The contact angle decreases slightly in the first 5 s for PHBV + 5% EPO, from 124.3° ± 1.2° to 117.7° ± 1.1°. Then a drastic decrease in the contact angle brings the value to 40.5° ± 1.3° within 12 seconds. In the case of PHBV fibers with 10% EPO addition, we observe a progressive decrease in the contact angle from 114.4° ± 0.9° in the 1st second to 63.4° ± 1.1° in the 12th second of the measurement. It can be seen that modifying the material with oil changed its properties from hydrophobic to hydrophilic, which is often observed with multi-component materials. 44,48,62,69

Zeta potential

The values of the surface potential as a function of the solution pH are shown in Fig. 5E. PHBV fibers show a higher potential over the entire measuring range compared to the oil-modified nonwovens. In the case of pure PHBV fibers, the isoelectric point was determined at pH = 2.95. The titration curves for the EPO-polymer blends in the acidic range (from pH 2.5 to 4.5) show identical values, which may indicate that the oil modification did not affect the surface potential of the sample in a strongly acidic environment. However, as the pH increases (above 4.5), the potential differences between the PHBV + 5% EPO and PHBV + 10% EPO fiber blends become apparent, where increasing the modifying additive reduces the potential of the fibrous membrane. The charge accumulated at the boundary of the two phases (water and polymer fibers) is driven by two mechanisms. The first affects the surface charge and is associated with the protonation of functional groups and the deprotonation of acid groups. The second mechanism (formation of interfacial charge) is based on the adsorption of ions from the electrolyte in the basic range.
28,[70][71][72] When interfacial charge formation dominates, the titration curves in the alkaline range are rather linear. We also observe a linear relationship in the acidic range. The roughness of the surface influences the contact angle; however, the changes in the wettability of the fibers with the addition of oil are similar, and in both cases water drops are absorbed, as a result of which the titration curves almost overlap in the acidic range, and the surface roughness can be neglected when comparing the oil-containing samples. 73,74 Oil has a lower surface tension than water, and this also affects wetting. 75 The oil spreading is important as it indicates the transport properties of the membranes, which are used as patches to deliver oil in a controlled way to the skin. The addition of oil to the PHBV blend reduces the surface free energy of the electrospun fibers. The wettability of electrospun membranes is affected by both the surface properties of the individual fibers and the morphology of the whole mesh. 76 Importantly, the changes in zeta potential allow us to recognize chemical changes on modified surfaces due to the different behaviour of the surfaces in a liquid surrounding. 77 The zeta potential depends on the wettability of the materials and can also be used to calculate the surface free energy using Fowkes' hypothesis. 78 The average skin pH in adults is 4.9, while the skin of newborns and the elderly has an increased pH. 79 Dry skin and increased pH values may be associated with a decrease in the filaggrin content. [79][80][81][82] For newborns and infants, acidic creams and low-pH emollients are commonly used to help prevent and alleviate the symptoms of atopic dermatitis. [83][84][85]

Oil spreading test

The spreading area of evening primrose oil (EPO) is shown in Fig. 6A. The graph shows the size of the spreading area from the top of the sample and from the bottom. It can be seen that the oil spreading area does not change with time for any sample. The PHBV sample has the largest area of EPO spreading, and the difference between the top and bottom areas is slightly more than 10 mm². The oil spreading areas on the PHBV samples with 5% and 10% EPO addition are much smaller than on pure PHBV fibers, in the range of 30-45 mm². The difference in area between the top and the bottom decreases as the amount of EPO added to the PHBV blend increases. It is worth noting that for the blends, the spreading area on the top surface of the sample is smaller than on the bottom. The smaller differences between the top and the bottom of the fibers indicate greater oil delivery through the electrospun PHBV + EPO membranes to the bottom. These results indicate that electrospun PHBV + EPO membranes loaded with additional oil between the fibers can be very effective as patches for delivering moisturizing ingredients to the skin. Another contributing aspect is the higher wettability of the PHBV + EPO membranes. On the other hand, the morphology of the PHBV fibers with EPO changed, affecting the porosity of the patches. One type of oil was used in this study, but other studies reported that neither the type of oil nor its viscosity affected the spreading of oil on electrospun fibers. 23,75 However, the size of the fibers in the mat is an important factor. 23,24,75 Electrospun PHBV fibers and polymer blend fibers with oil are characterized by an increase in fiber diameter with the amount of added oil.
Therefore, the largest spreading area is observed for pure PHBV fibers and smaller areas for the oily fibers.

Water vapor and heat transmission rate

The results of the WVTR tests are shown in Fig. 6B. The materials without direct oil application to the sample surface showed similar measurement values. For the gauze reference material, we observe only a slight decrease in WVTR when comparing the material with and without oil. In contrast, a decrease of about 30% and 40% was observed for PHBV fibers with the addition of oil for 10% and 5% EPO, respectively. The lowest value of the index, a decrease of 50%, was observed for pure PHBV fibers. Additionally, the thermal imaging presented in Fig. 6C and D shows that the fibers are a barrier to heat transfer. Immediately after the oil is dropped on the PHBV patches, the temperatures of the wet and dry fibers on the skin are similar; however, after 3 hours of wearing, the temperature at the location of the oil-soaked fibers is lower than that of the dry fibers with pipetted oil droplets. Additionally, Fig. S4 in the ESI† includes a thermal image of the forearms before applying and after removing the patches. Electrospun membranes have very high porosity; therefore, permeability through them is high. 86,87 This is especially observed for the PHBV samples without oil on the surface, i.e. the dry patches. On the other hand, the WVTR values for samples with oil on the surface are lower, so less water vapor is able to diffuse through the membrane. Therefore, these materials may be suitable for moisture retention on atopic skin. It is extremely important to ensure adequate permeability and humidity for a dressing material. 87,88 PHBV fibers, as well as their blends, are suitable for such applications, as the WVTR for a conventional wound dressing is about 2500-3000 g m⁻². 89,90 In the case of atopic dermatitis, the dryness of the skin increases, so the patches should reduce water loss from the skin layer. This is ensured by the samples with oil additionally applied to the surface. Moreover, the lowest WVTR was achieved for PHBV fibers with oil applied to the surface. This result can be related to the oil spreading test on the electrospun membranes: for the PHBV samples the oil spreading area was the highest (Fig. 6A), and the highest spreading on PHBV fibers resulted in the lowest water loss.

Skin test of electrospun patches

In Fig. 7, we present all the results of skin hydration before and after 6 h of applying the patch. Images of the tested patches on the skin of the volunteers are presented in the ESI,† Fig. S5. The application of the electrospun membranes to the skin was easy in all cases. Generally, the oil-loaded patches give significantly higher skin hydration than the control samples. In the case of the control patches, skin hydration increased by a maximum of 5%. In contrast, hydration increases by 15-20% for the fibrous patches with additional EPO applied. Although the results for all the samples impregnated with oil are similar, the highest moisture increase was demonstrated by the PHBV + 5% EPO fibers. Applying the oil to the patches, both of pure PHBV fibers and of the EPO blends, clearly indicates that these materials increase skin hydration. Previous studies carried out with electrospun PCL, 24 PVB, 23,75 and PI 91 fibers confirm the effectiveness of oil-loaded patches for moisturizing the skin.
This type of dressing can be crucial in preventing dry skin, which is one of the problems faced by people suffering from atopic dermatitis. 92 What is more, the patches used for moisturizing can prevent skin scratching and external infections in the broken layer of the epidermis. Maintaining proper skin hydration is extremely important in supporting the regeneration of damaged skin as well as in preventing itching and treating inflammation. 92,93

Conclusions

These studies confirm that, by mixing a biodegradable polymer with evening primrose oil, we are able to produce electrospun fibers with a smooth and uniform structure. The chemical characterization of the tested materials confirms the presence of oil in the electrospun fibers. The increasing size of the fibers with the addition of EPO increased the number of adhesion points in the electrospun membrane, and as a result the mechanical properties were significantly improved, especially the extensibility and toughness of the overall dressing. Moreover, the addition of oil also influenced the crystallization and the melting enthalpy of the modified materials, further enhancing the mechanical performance of the PHBV + EPO membranes. Apart from the application and handling advantages of blending PHBV with EPO, we observed a significant improvement in skin hydration once the patches were soaked in evening primrose oil and applied to volunteers in the in vivo tests. The oil transport through the membrane was faster for the PHBV + EPO samples, as their hydrophilicity was improved. Importantly, the water vapor transmission rate for PHBV patches with oil was reduced, as the pipetted oil blocked the pores in the membrane, thus contributing to the retention of skin moisture. The PHBV patches with blended oil are desirable as thermally insulating dressings decreasing the water loss from the skin. We have demonstrated the great potential of blend-electrospun PHBV patches with oils for the growing need for skin protection and comfort, driven by the worldwide increase in skin diseases related to current climate change and environmental pollution.

Conflicts of interest

There are no conflicts to declare.
A Robust AdaBoost.RT Based Ensemble Extreme Learning Machine

Extreme learning machine (ELM) has been well recognized as an effective learning algorithm with extremely fast learning speed and high generalization performance. However, to deal with regression applications involving big data, the stability and accuracy of ELM must be further enhanced. In this paper, a new hybrid machine learning method called robust AdaBoost.RT based ensemble ELM (RAE-ELM) for regression problems is proposed, which combines ELM with the novel robust AdaBoost.RT algorithm to achieve better approximation accuracy than a single ELM network. The robust threshold for each weak learner is adapted according to the weak learner's performance on the corresponding problem dataset. Therefore, RAE-ELM can output the final hypothesis as an optimally weighted ensemble of weak learners. On the other hand, ELM is a quick learner with high regression performance, which makes it a good candidate for the "weak" learners. We prove that the empirical error of RAE-ELM is within a significantly superior bound. Experimental verification has shown that the proposed RAE-ELM outperforms other state-of-the-art algorithms on many real-world regression problems.

Introduction

In the past decades, computational intelligence methodologies have been widely adopted and effectively utilized in various areas of scientific research and engineering applications [1, 2]. Recently, Huang et al. introduced an efficient learning algorithm, named extreme learning machine (ELM), for single-hidden layer feedforward neural networks (SLFNs) [3, 4]. Unlike conventional learning algorithms such as backpropagation (BP) methods [5] and support vector machines (SVMs) [6], ELM randomly generates the hidden neuron parameters (the input weights and the hidden layer biases) before seeing the training data, and can analytically determine the output weights without tuning the hidden layer of SLFNs. As the randomly generated hidden neuron parameters are independent of the training data, ELM can reach not only the smallest training error but also the smallest norm of output weights. ELM overcomes several limitations of conventional learning algorithms, such as local minima and slow learning speed, and embodies very good generalization performance.

As a popular and appealing learning algorithm, massive variants of ELM have been investigated in order to further improve its generalization performance. Rong et al. [7] proposed an online sequential fuzzy extreme learning machine (OS-Fuzzy-ELM) for function approximation and classification problems. Cao et al. [8] combined the voting based extreme learning machine [9] with the online sequential extreme learning machine [10] into a new methodology, called voting based online sequential extreme learning machine (VOS-ELM). In addition, to solve two drawbacks of the basic ELM, namely the over-fitting problem and unstable accuracy, Luo et al. [11] presented a novel algorithm, called sparse Bayesian extreme learning machine (SB-ELM), which estimates the marginal likelihood of the output weights, automatically pruning the redundant hidden nodes. What is more, to overcome the limitations of supervised learning algorithms, semi-supervised variants have also been developed according to the theory of semi-supervised learning [12].
Although ELM has good generalization performance for classification and regression problems, how to efficiently perform training and testing on big data is challenging for ELM as well. As a single learning machine, although ELM is quite stable compared to other learning algorithms, its classification and regression performance may still vary slightly among different trials on big datasets. Many researchers have sought various ensemble methods that integrate a set of ELMs into a combined network structure, and verified that they can perform better than an individual ELM. Lan et al. [13] proposed an ensemble of online sequential ELM (EOS-ELM), which is comprised of several OS-ELM networks. The mean of the OS-ELM networks' outputs was used as the performance indicator of the ensemble networks. Liu and Wang [14] presented an ensemble-based ELM (EN-ELM) algorithm, where a cross-validation scheme was used to create an ensemble of ELM classifiers for decision making. Besides, Xue et al. [15] proposed a genetic ensemble of extreme learning machine (GE-ELM), which adopts genetic algorithms (GAs) to first produce a group of candidate networks; according to a specific ranking strategy, some of the networks are then selected to form a new ensemble network. More recently, Wang et al. [16] presented a parallelized ELM ensemble method based on the M3-network, called M3-ELM. It improves computational efficiency through parallelism and solves imbalanced classification tasks through task decomposition.

To learn from the exponentially increasing number and types of data with high accuracy, traditional learning algorithms tend to suffer from the overfitting problem. Hence, a robust and stable ensemble algorithm is of great importance. Dasarathy and Sheela [17] first introduced an ensemble system, whose idea is to partition the feature space using multiple classifiers. Furthermore, Hansen and Salamon [18] presented an ensemble of neural networks with a plurality consensus scheme that obtains far better performance in classification than approaches using single neural networks. After that, ensemble-based algorithms were widely explored [19][20][21][22][23]. Among the ensemble-based algorithms, Bagging and Boosting are the most prevailing methods for training neural network ensembles. The Bagging (short for Bootstrap Aggregation) algorithm randomly selects bootstrap samples from the original training set, and the diversity in bagging-based ensembles is then ensured by the variations within the bootstrapped replicas on which each classifier is trained. By using relatively weak classifiers, the decision boundaries measurably vary with respect to relatively small perturbations in the training data. As an iterative method presented by Schapire [20] for generating a strong classifier, boosting can achieve arbitrarily low training error from an ensemble of weak classifiers, each of which can barely do better than random guessing. Thereafter, a novel boosting algorithm, called adaptive boosting (AdaBoost), was presented by Schapire and Freund [21]. The AdaBoost algorithm improves on traditional boosting methods in two respects. One is that the instances are drawn into subsequent subdatasets from an iteratively updated sample distribution of the same training dataset; AdaBoost replaces random subsampling by weighted versions of the same training dataset, which can be repeatedly utilized, so the training dataset is not required to be very large. The other is to define an
ensemble classifier through a weighted majority vote of a set of weak classifiers, where the voting weights are based on the classifiers' training errors.

However, many of the existing investigations on ensemble algorithms focus on classification problems. The ensemble algorithms for classification problems, unfortunately, cannot be directly applied to regression problems. Regression methods provide predicted results through analyzing historical data. Forecasting and prediction are important functional requirements for real-world applications, such as temperature prediction, inventory management, and position tracking in manufacturing execution systems. To solve regression problems, based on the AdaBoost algorithm for the classification problem [24][25][26], Schapire and Freund [21] extended AdaBoost.M2 to AdaBoost.R. In addition, Drucker [27] proposed the AdaBoost.R2 algorithm, which is based on an ad hoc modification of AdaBoost.R. Besides, Avnimelech and Intrator [28] presented the notion of weak and strong learning and an appropriate equivalence theorem between them so as to improve the boosting algorithm for regression issues. What is more, Solomatine and Shrestha [29, 30] proposed a novel boosting algorithm, called AdaBoost.RT. AdaBoost.RT projects regression problems into the binary classification domain, which can be processed by the AdaBoost algorithm, while filtering out those examples with a relative estimation error larger than a preset threshold value.

The proposed hybrid algorithm, which combines an effective learner, ELM, with a promising ensemble method, the AdaBoost.RT algorithm, inherits their intrinsic properties and is able to achieve good generalization performance when dealing with big data. As with the development effort on general ensemble algorithms, the available ELM ensemble algorithms are mainly aimed at classification problems, while regression problems with ensemble algorithms have received relatively little attention. Tian and Mao [31] presented an ensemble ELM based on a modified AdaBoost.RT algorithm (modified Ada-ELM) in order to predict the temperature of molten steel in a ladle furnace. Their hybrid learning algorithm combined the modified AdaBoost.RT with ELM, which possesses the advantages of ELM and overcomes the limitation of the basic AdaBoost.RT through a self-adaptively modifiable threshold value. The threshold value Φ need not be constant; instead, it can be adjusted using a self-adaptive modification mechanism subject to the change trend of the prediction error at each iteration. The variation range of the threshold value is set to [0, 0.4], as suggested by Solomatine and Shrestha [29, 30]. However, the initial value of Φ is manually fixed to the mean of the variation range of the threshold value, e.g. Φ₀ = 0.2, according to an empirical suggestion. When the error rate εₜ is smaller than that of the previous iteration, εₜ₋₁, the value of Φ decreases, and vice versa. Hence, such an empirical-suggestion-based method is not fully self-adaptive over the whole threshold domain. Moreover, the manually fixed initial threshold is not related to the properties of the input dataset and the weak learners, which makes it hard for the ensemble ELM to reach a generally optimal learning effect. This paper presents a robust AdaBoost.RT based ensemble ELM (RAE-ELM) for regression problems, which combines ELM with a robust AdaBoost.RT algorithm. The robust AdaBoost.RT algorithm not only overcomes the limitation of the original AdaBoost.RT algorithm (original Ada-ELM), but also
makes the threshold value Φ adaptive to the input dataset and the ELM networks instead of being preset. The main idea of RAE-ELM is as follows. The ELM algorithm is selected as the "weak" learning machine to build the hybrid ensemble model. A new robust AdaBoost.RT algorithm is proposed that utilizes an error-statistics method to dynamically determine the regression threshold value, rather than via manual selection, which may be ideal only for very few regression cases. The mean and the standard deviation of the approximation errors are computed at each iteration. The robust threshold for each weak learner is defined as a scaled standard deviation. Based on the concept of standard deviation, those individual training data with error exceeding the robust threshold are regarded as "flaws in this training process" and are rejected. The rejected data are processed in the later part of the weak learners' iterations.

We then analyze the convergence of the proposed robust AdaBoost.RT algorithm. It can be proved that the error of the final hypothesis output by the proposed ensemble algorithm, ε_ensemble, is within a significantly superior bound. The proposed robust AdaBoost.RT based ensemble extreme learning machine can avoid overfitting because of the characteristics of ELM. ELM tends to reach the solutions straightforwardly, and the error rate of the regression outcome at each training step is much smaller than 0.5. Therefore, the proposed robust AdaBoost.RT based ensemble extreme learning machine, selecting ELM as the "weak" learner, can avoid overfitting. Moreover, as ELM is a fast learner with quite high regression performance, it contributes to the overall generalization performance of the robust AdaBoost.RT based ensemble module. The experimental results demonstrate that the proposed robust AdaBoost.RT ensemble ELM (RAE-ELM) has superior learning properties in terms of stability and accuracy for regression issues and better generalization performance than other algorithms.

This paper is organized as follows. Section 2 gives a brief review of the basic ELM. Section 3 introduces the original and the proposed robust AdaBoost.RT algorithms. The hybrid robust AdaBoost.RT ensemble ELM (RAE-ELM) algorithm is then presented in Section 4. The performance evaluation of RAE-ELM and its regression ability are verified using experiments in Section 5. Finally, the conclusion is drawn in the last section.

Brief on ELM

Recently, Huang et al. [3, 4] proposed novel neural networks, called extreme learning machines (ELMs), for single-hidden layer feedforward neural networks (SLFNs) [32, 33]. ELM is based on the least-squares method: it randomly assigns the input weights and the hidden layer biases, and then the output weights between the hidden nodes and the output layer can be analytically determined. Since the learning process in ELM takes place without iterative tuning, the ELM algorithm tends to reach the solutions straightforwardly without suffering from problems such as local minima, slow learning speed, and overfitting.

From the standard optimization theory point of view, the objective of ELM, minimizing both the training errors and the output weights, can be presented as [4]

Minimize: L = (1/2)‖β‖² + (C/2) Σᵢ₌₁ᴺ ‖ξᵢ‖², subject to: h(xᵢ)β = tᵢᵀ − ξᵢᵀ, i = 1, …, N,

where β = [β₁, …, β_m] is the matrix of output weights, βⱼ is the vector of the weights between the hidden layer and the j-th output node, ξᵢ is the training error vector for sample i, and C is the regularization parameter representing the trade-off between the minimization of training errors and the maximization of the marginal distance.
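For concreteness, the following is a minimal numerical sketch of ELM regression under the formulation above, using a sigmoid hidden layer and the regularized least-squares output weights β = (I/C + HᵀH)⁻¹HᵀT derived in the non-kernel case below. It is an illustrative implementation under those assumptions, not the authors' Matlab code, and all parameter values are example choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50, C=2.0**5):
    # Randomly generated hidden parameters, independent of the data.
    a = rng.uniform(-1, 1, (X.shape[1], n_hidden))   # input weights
    b = rng.uniform(-1, 1, n_hidden)                 # hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))           # hidden layer output
    # Regularized least-squares output weights (non-kernel ELM solution).
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ H, H.T @ T)
    return a, b, beta

def elm_predict(X, a, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))
    return H @ beta

# Toy usage: fit y = sin(3x) on [-1, 1].
X = rng.uniform(-1, 1, (200, 1))
T = np.sin(3 * X).ravel()
params = elm_train(X, T)
print("train RMSE:", np.sqrt(np.mean((elm_predict(X, *params) - T) ** 2)))
```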
According to the KKT theorem, we can obtain the following solutions.

(1) Kernel Case. The ELM output function is

f(x) = h(x)β = h(x)Hᵀ(I/C + HHᵀ)⁻¹T.

A kernel matrix for ELM is defined as

Ω_ELM = HHᵀ, with Ω_ELM(i, j) = h(xᵢ)·h(xⱼ) = K(xᵢ, xⱼ).

Then, the ELM output function can be written as

f(x) = [K(x, x₁), …, K(x, x_N)](I/C + Ω_ELM)⁻¹T.

In this special case, a corresponding kernel K(xᵢ, xⱼ) is used in ELM instead of the feature mapping h(x), which need not be known. We call K(xᵢ, xⱼ) = h(xᵢ)·h(xⱼ) the ELM random kernel, where the feature mapping h(x) is randomly generated.

(2) Nonkernel Case. Similarly, based on the KKT theorem, we have

β = (I/C + HᵀH)⁻¹HᵀT.

In this case, the ELM output function is

f(x) = h(x)(I/C + HᵀH)⁻¹HᵀT.

The Proposed Robust AdaBoost.RT Algorithm

We first describe the original AdaBoost.RT algorithm for the regression problem and then present a new robust AdaBoost.RT algorithm in this section. The corresponding analysis of the novel algorithm is also given.

The Original AdaBoost.RT Algorithm. Solomatine and Shrestha proposed AdaBoost.RT [29, 30], a new boosting algorithm for regression problems, where the letters R and T represent regression and threshold, respectively. In outline, the algorithm iterates while t ≤ T: at each step it calls the weak learner, providing it with the current distribution Dₜ over the training samples, builds a regression model, demarcates each sample as a correct or incorrect prediction by comparing its absolute relative error with the threshold Φ, computes the weighted error rate, updates the distribution accordingly, and finally outputs the final hypothesis as the combination of the weak learners' outputs under their computed weights.

The AdaBoost.RT algorithm projects regression problems into the binary classification domain. Based on boosting regression estimators [28] and BEM [34], the AdaBoost.RT algorithm introduces the absolute relative error (ARE) to demarcate samples as either correct or incorrect predictions. If the ARE of any particular sample is greater than the threshold Φ, the predicted value for this sample is regarded as an incorrect prediction; otherwise, it is marked as a correct prediction. This indication method is similar to the "misclassification" and "correct-classification" labeling used in classification problems. The algorithm assigns relatively large weights to those weak learners early in the learner list that reach a high correct-prediction rate. The samples with incorrect predictions are handled as ad hoc cases by the following weak learners. The outputs from each weak learner are combined into the final hypothesis using the corresponding computed weights.

AdaBoost.RT requires the manual selection of the threshold Φ, which is a main factor sensitively affecting the performance of committee machines. If Φ is too small, very few samples are treated as correct predictions, and these easily get boosted; the following learners then have to handle a large number of ad hoc samples, which makes the ensemble algorithm unstable. On the other hand, if Φ is too large, say greater than 0.4, most samples are treated as correct predictions, and false samples fail to be rejected; in fact, this causes low convergence efficiency and overfitting. The initial AdaBoost.RT and its variants suffer from this limitation in setting the threshold value, which is specified either as a manually chosen constant or as a variable changing in the vicinity of 0.2. Both strategies are unrelated to the regression capability of the weak learner. In order to determine the Φ value effectively, a novel improvement of AdaBoost.RT is proposed in the following section.

The Proposed Robust AdaBoost.RT Algorithm.
To overcome the limitations suffered by the current works on AdaBoost.RT, we embed statistics theory into the AdaBoost.RT algorithm. It overcomes the difficulty of optimally determining the initial threshold value and enables the intermediate threshold values to be dynamically self-adjustable according to the intrinsic properties of the input data samples. The proposed robust AdaBoost.RT algorithm proceeds as follows.

(1) Call the weak learner WLₜ, providing it with the distribution Dₜ. Build the regression model, compute the robust threshold λσₜ and the error rate εₜ, and update the distribution as Dₜ₊₁(i) = Dₜ(i)βₜ/Zₜ for "accepted" samples (Dₜ₊₁(i) = Dₜ(i)/Zₜ otherwise), where Zₜ is a normalization factor chosen such that Dₜ₊₁ will be a distribution. Output the final hypothesis as the weighted combination of the weak learners.

At each iteration of the proposed robust AdaBoost.RT algorithm, the standard deviation of the approximation error distribution is used as the criterion. In probability and statistics theory, the standard deviation measures the amount of variation or dispersion from the average. If the data points tend to be very close to the mean value, the standard deviation is low; on the other hand, if the data points are spread out over a large range of values, a high standard deviation results.

The standard deviation may serve as a measure of uncertainty for a set of repeated predictions. When deciding whether predictions agree with their corresponding true values, the standard deviation of the predictions made by the underlying approximation function is of crucial importance: if the averaged distance from the predictions to the true values is large, then the regression model being tested probably needs to be revised, because sample points fall outside the range of values that could reasonably be expected to occur, and the prediction accuracy of the model is low.

In the proposed robust AdaBoost.RT algorithm, the approximation error of the t-th weak learner, WLₜ, for an input dataset is represented as a statistical distribution with parameters μₜ ± σₜ, where μₜ stands for the expected value, σₜ stands for the standard deviation, and λ is an adjustable relative factor that ranges from 0 to 1. The threshold value for WLₜ is defined by the scaled standard deviation, λσₜ. In the hybrid learning algorithm, the trained weak learners are assumed to be able to generate small prediction errors (εₜ < 1/2). For all t, μₜ ∈ (0 − δ, 0 + δ) with δ → 0 and t = 1, …, T, where δ denotes a small error limit approaching zero. The population mean of a regression error distribution is closer to the targeted zero than the elements of the population; therefore, the means of the obtained regression errors fluctuate around zero within a small range. The standard deviation σₜ is solely determined by the individual samples and the generalization performance of the weak learner WLₜ. Generally, σₜ is relatively large, such that most of the outputs are located within the range [−σₜ, +σₜ], which tends to make the boosting process unstable. To maintain a stable adjustment of the threshold value, a relative factor λ is applied to the standard deviation σₜ, which results in the robust threshold λσₜ. Samples whose errors fall within the threshold range [−λσₜ, +λσₜ] are treated as "accepted" samples; other samples are treated as "rejected" samples. With the introduction of the robust threshold, the algorithm is stable and resistant to noise in the data. According to the error rate of each weak learner's regression model, each weak learner WLₜ will be assigned an accordingly computed weight.
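The following Python sketch condenses one round of the update just described: the robust threshold λσₜ, the accepted/rejected demarcation, the weighted error rate εₜ, and the distribution update. The choices βₜ = εₜ/(1 − εₜ) and an ensemble weight of ln(1/βₜ) follow the convergence analysis below; they are assumptions of this sketch rather than a verbatim transcription of the paper's algorithm, and y_pred is assumed to come from a weak learner (e.g. an ELM) already trained under the distribution D.

```python
import numpy as np

def robust_adaboost_rt_round(y_true, y_pred, D, lam=0.5):
    err = y_pred - y_true                       # per-sample errors
    mu = np.average(err, weights=D)             # error mean under D
    sigma = np.sqrt(np.average((err - mu) ** 2, weights=D))
    rejected = np.abs(err) > lam * sigma        # robust threshold lam*sigma
    eps = D[rejected].sum()                     # weighted error rate eps_t
    if eps >= 0.5:
        raise ValueError("weak learner too weak; boosting would diverge")
    beta = eps / (1.0 - eps)                    # weight-updating factor
    D_next = D.copy()
    D_next[~rejected] *= beta                   # shrink accepted samples
    D_next /= D_next.sum()                      # normalization factor Z_t
    return D_next, beta, np.log(1.0 / beta)    # ensemble weight (assumed)
```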
For one regression problem, the performances of different weak learners may differ. The regression error distributions for different weak learners under the robust threshold are shown in Figure 1. In Figure 1(a), the weak learner WLₜ generates a regression error distribution with a large error rate, where the standard deviation is relatively large. On the other hand, another weak learner WLₜ₊ⱼ may generate an error distribution with a small standard deviation, as shown in Figure 1(b). The red triangular points represent "rejected" samples, whose regression errors are greater than the specified robust threshold; their weighted sum gives εₜ = Σ_{i∈Eₜ} Dₜ(i), where Eₜ = {i : |fₜ(xᵢ) − yᵢ| > λσₜ}, i ∈ [1, m], as described in Step (4) of the proposed algorithm. The green circular points represent the "accepted" samples, whose regression errors are less than the robust threshold. The weight vector for every "accepted" sample is dynamically changed for each weak learner WLₜ, while that for "rejected" samples remains unchanged. As illustrated in Figure 1, in terms of stability and accuracy, the regression capability of weak learner WLₜ₊ⱼ is superior to that of WLₜ. The robust threshold values for WLₜ₊ⱼ and WLₜ are computed respectively, where the former is smaller than the latter, to discriminate their correspondingly different regression performances. The proposed method overcomes the limitation suffered by the existing methods, where the threshold value is set empirically. The critical factor used in the boosting process, the threshold, becomes robust and self-adaptive to the individual weak learners' performance on the input data samples. Therefore, the proposed robust AdaBoost.RT algorithm is capable of outputting the final hypothesis as an optimally weighted ensemble of the weak learners.

In the following, we show that the training error of the proposed robust AdaBoost.RT algorithm is bounded. One lemma needs to be given in order to prove the convergence of this algorithm.

Theorem 2. The improved adaptive AdaBoost.RT algorithm generates hypotheses with errors ε₁, ε₂, …, ε_T < 1/2. Then, the error of the final hypothesis output by this algorithm is bounded above by

ε_ensemble ≤ 2ᵀ ∏ₜ₌₁ᵀ √(εₜ(1 − εₜ)).

Proof. In this proof, we transform the regression problem into binary classification problems. In the proposed improved adaptive AdaBoost.RT algorithm, the mean of the errors μₜ is assumed to be close to zero; thus, the dynamically adaptive thresholds can ignore the mean of the errors. The final hypothesis output makes a mistake on a sample only if the weighted majority of the weak learners errs on it (13), and the final weight of any sample follows from the multiplicative distribution updates (14). Combining (13) and (14), the sum of the final weights is bounded by the sum of the final weights of the rejected samples (15), where ε_ensemble is the error of the final hypothesis output. Based on Lemma 1, and combining the resulting inequalities for t = 1, …, T (17), together with (15), we obtain (18), a bound on ε_ensemble in terms of the βₜ. Considering that all factors in the multiplication are positive, minimizing the right-hand side reduces to minimizing each factor individually; βₜ = εₜ/(1 − εₜ) is obtained by setting the derivative of the t-th factor to zero. Substituting this computed βₜ into (18) completes the proof.
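As a quick numeric sanity check of the bound in Theorem 2 (as reconstructed here, an assumption of this rewrite), the snippet below evaluates 2ᵀ ∏ₜ √(εₜ(1 − εₜ)) for a hypothetical run of 20 weak learners, each with error rate εₜ = 0.3. Each factor is strictly below 1 whenever εₜ ≠ 0.5, so the bound decays geometrically with T.

```python
import math

def ensemble_error_bound(eps_list):
    """Evaluate 2^T * prod_t sqrt(eps_t * (1 - eps_t))."""
    bound = 1.0
    for eps in eps_list:
        bound *= 2.0 * math.sqrt(eps * (1.0 - eps))  # factor < 1 iff eps != 0.5
    return bound

# Hypothetical example: 20 rounds, each weak learner with eps_t = 0.3.
print(ensemble_error_bound([0.3] * 20))  # ~ 0.917^20, roughly 0.18
```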
Unlike the original AdaBoost.RT and its existing variants, the robust threshold in the proposed AdaBoost.RT algorithm is determined and self-adaptively adjusted according to the individual weak learners and the data samples. Through the analysis of the convergence of the proposed robust AdaBoost.RT algorithm, it can be proved that the error of the final hypothesis output by the proposed ensemble algorithm, ε_ensemble, is within a significantly superior bound. The study shows that the robust AdaBoost.RT algorithm proposed in this paper can overcome the limitations of the available AdaBoost.RT algorithms.

A Robust AdaBoost.RT Ensemble-Based Extreme Learning Machine

In this paper, a robust AdaBoost.RT ensemble-based extreme learning machine (RAE-ELM), which combines ELM with the robust AdaBoost.RT algorithm described in the previous section, is proposed to improve the robustness and stability of ELM. A set of T ELMs is adopted as the "weak" learners. In the training phase, RAE-ELM utilizes the proposed robust AdaBoost.RT algorithm to train every ELM model and assign an ensemble weight accordingly, so that each ELM obtains its corresponding distribution based on the training output. The optimally weighted ensemble model of ELMs, f_ensemble, is the final hypothesis used for making predictions on the testing dataset. The proposed RAE-ELM is illustrated in Figure 2.

Initialization. For the first weak learner, ELM₁ is supplied with the training samples under a uniform distribution of weights, so that each sample has an equal opportunity to be chosen during the first training process for ELM₁.

Distribution Updating. The relative prediction error rates are used to evaluate the performance of each ELM. The prediction error of the t-th ELM, ELMₜ, for the input data samples is represented as a statistical distribution, μₜ ± λσₜ, where μₜ stands for the expected value and λσₜ is defined as the robust threshold (σₜ stands for the standard deviation, and the relative factor λ ∈ (0, 1)). The robust threshold is applied to demarcate predictions as "accepted" or "rejected." If the prediction error of a particular sample falls into the region μₜ ± λσₜ bounded by the robust thresholds, the prediction for this sample is regarded as "accepted" for ELMₜ, and vice versa for "rejected" predictions. The probabilities of the "rejected" predictions are accumulated to calculate the error rate εₜ. Each ELM attempts to achieve a hypothesis with a small error rate. The robust AdaBoost.RT algorithm then calculates the distribution for the next learner, ELMₜ₊₁. For every sample that is correctly predicted by the current ELMₜ, the corresponding weight is multiplied by the weight-updating factor βₜ; otherwise, the weight remains unchanged. This process is iterated for the next ELMₜ₊₁ until the last learner ELM_T, unless εₜ is higher than 0.5, because once the error rate is higher than 0.5, the AdaBoost algorithm does not converge and tends to overfit [21]. Hence, the error rate εₜ must be less than 0.5.

Decision Making on RAE-ELM.
Decision Making on RAE-ELM. The weight-updating parameter β_t is used as an indicator of the regression effectiveness of ELM_t in the current iteration. Given the relationship between β_t and ε_t, β_t becomes larger as ε_t increases, and RAE-ELM then grants ELM_t a small ensemble weight. Conversely, an ELM_t with relatively superior regression performance is granted a larger ensemble weight. The hybrid RAE-ELM model combines the set of ELMs under these different weights as the final hypothesis for decision making.

Performance Evaluation of RAE-ELM

In this section, the performance of the proposed RAE-ELM learning algorithm is compared with other popular algorithms on 14 real-world regression problems covering different domains from the UCI Machine Learning Repository [35]; the specifications of the benchmark datasets are shown in Table 1. The algorithms compared include basic ELM [4], the original AdaBoost.RT-based ELM (original Ada-ELM) [30], the modified self-adaptive AdaBoost.RT ELM (modified Ada-ELM) [31], support vector regression (SVR) [36], and least-squares support vector regression (LS-SVR) [37]. All evaluations are conducted in the Matlab environment on a Windows 7 machine with a 3.20 GHz CPU and 4 GB RAM.

In our experiments, all input attributes are normalized into the range [−1, 1], while the outputs are normalized into [0, 1]. The real-world benchmark datasets are embedded with noise, their distributions are unknown, and they span small and large sizes as well as low and high dimensions. For each trial of the simulations, the whole dataset of an application is randomly partitioned into a training dataset and a testing dataset with the numbers of samples shown in Table 1; 25% of the training samples are used as the validation dataset. Each partitioned training, validation, and testing dataset is kept fixed as input for all the algorithms.

Figure 2: The structure of the proposed RAE-ELM: a sequence of m samples (x_1, y_1), ..., (x_m, y_m) is passed through the successive ELM training stages, each of which calculates its error rate ε_t and updates the distribution, after which boosting combines the trained learners into the final hypothesis.

For the RAE-ELM, basic ELM, original Ada-ELM, and modified Ada-ELM algorithms, the suitable numbers of hidden nodes are determined using the preserved validation dataset. The sigmoid function g(a, b, x) = 1/(1 + exp(−(a·x + b))) is selected as the activation function in all these algorithms. Fifty trials of simulations have been conducted for each problem, with the training, validation, and testing samples randomly split for each trial. The performance of the algorithms is assessed using the average root mean square error (RMSE) in testing, and the significantly better results are highlighted in boldface.

Model Selection. In ensemble algorithms, the number of networks in the ensemble needs to be determined. According to Occam's razor, excessively complex models are affected by statistical noise, whereas simpler models may capture the underlying structure better and may thus have better predictive performance. Therefore, the parameter T, the number of weak learners, need not be very large.

If we define ε_t in (12) as ε_t = 0.5 − γ_t, where γ_t > 0 is constant, the bound (19) can be rewritten in a form involving the Kullback-Leibler divergence KL.
We then simplify (19) by using 0.5 − γ in place of 0.5 − γ_t, that is, each γ_t is set to the same value, from which the simplified bound follows.

For RAE-ELM, the number of ELM networks needs to be determined. The number of ELM networks is set to 5, 10, 15, 20, 25, and 30 in our simulations, and the optimal value is selected as the one which results in the best average RMSE in testing. Besides, in our simulation trials, the relative factor λ in RAE-ELM is a parameter that needs to be optimized within the range λ ∈ (0, 1). We start the simulations at λ = 0.1 and increase λ in steps of 0.1. Table 2 shows examples of setting both T and λ for our simulation trials.

As illustrated in Table 2 and Figure 3, RAE-ELM with the sigmoid activation function achieves good generalization performance on the Parkinson disease dataset as long as the number of ELM networks is larger than 15. For a given number of ELM networks, the RMSE is not very sensitive to the variation of λ and tends to be smaller when λ is around 0.5. For a fair comparison, we set RAE-ELM with T = 20 and λ = 0.5 in the following experiments. For both the original Ada-ELM and the modified Ada-ELM, the ensemble model is unstable when the number of ELM networks is less than 7; the number of ELM networks is therefore also set to 20 for both.

We use the popular Gaussian kernel function K(u, v) = exp(−γ‖u − v‖²) in both SVR and LS-SVR. The performances of SVR and LS-SVR are known to be sensitive to the combination (C, γ); hence, the cost parameter C and the kernel parameter γ need to be adjusted over a wide range to obtain good generalization performance. For each dataset, 50 different values of C and 50 different values of γ, that is, 2500 pairs (C, γ), are applied as the adjustment parameters, with candidate values {2^−24, 2^−23, ..., 2^24, 2^25}. For both SVR and LS-SVR, the best-performing combination (C, γ) is selected for each dataset, as presented in Table 3.

For basic ELM and the other ELM-based ensemble methods, the sigmoid function g(a, b, x) = 1/(1 + exp(−(a·x + b))) is selected as the activation function. The parameters (C, L) need to be selected to achieve the best generalization performance, where the cost parameter C is selected from the range {2^−24, 2^−23, ..., 2^24, 2^25} and the candidate numbers of hidden nodes L are {10, 20, ..., 1000}.

In addition, for the original AdaBoost.RT-based ensemble ELM, the threshold Φ has to be chosen before seeing the data. It must be selected manually according to an empirical suggestion, and it is a sensitive factor affecting the regression performance. If Φ is too low, it is generally difficult to obtain a sufficient number of "accepted" samples; if Φ is too high, some wrong samples are treated as "accepted" and the ensemble model tends to become unstable. According to Shrestha and Solomatine's experiments, the threshold Φ should be defined between 0 and 0.4 in order to make the ensemble model stable [30]. In our simulations, we therefore set thresholds incrementally within the range from 0 to 0.4. The original Ada-ELM with threshold values in {0.1, 0.15, ..., 0.35, 0.4} generates satisfactory results for all the regression problems, and the best-performing original Ada-ELM is shown in boldface. Moreover, the modified Ada-ELM algorithm needs an initial value Φ_0 from which the subsequent thresholds are calculated in the iterations. Tian and Mao suggested setting the default initial value of Φ_0 to 0.2 [31]. Since this manually fixed initial threshold is unrelated to the characteristics of the ELM prediction on the input dataset, the algorithm may not reach its best generalization performance. In our simulations, we compare the performances of modified Ada-ELMs with initial threshold values Φ_0 in {0.1, 0.15, ..., 0.3, 0.35}; the best-performing modified Ada-ELM is also presented in Table 3.

Performance Comparisons between RAE-ELM and Other Learning Algorithms. In this subsection, the performance of the proposed RAE-ELM is compared with the other learning algorithms: basic ELM [4], original Ada-ELM [30], modified Ada-ELM [31], support vector regression [36], and least-squares support vector regression [37]. The comparison results of RAE-ELM and the other learning algorithms on real-world data regressions are shown in Table 4.

Table 4 lists the averaged results over multiple trials of the four ELM-based algorithms (RAE-ELM, basic ELM, original Ada-ELM [30], and modified Ada-ELM [31]), SVR [36], and LS-SVR [37] on fourteen representative real-world regression problems. The selected datasets include both large-scale and small-scale data, as well as high-dimensional and low-dimensional problems. The average testing RMSE obtained by RAE-ELM is the best among the six algorithms in all fourteen cases. The performance of the original Ada-ELM is sensitive to the selected threshold value Φ, and the best-performing original Ada-ELM models for different regression problems have different optimal threshold values; the manual selection strategy is therefore unreliable. The generalization performance of the modified Ada-ELM is in general better than that of the original Ada-ELM, but the empirically suggested initial threshold value of 0.2 does not guarantee the best-performing regression model. All three AdaBoost.RT-based ensemble ELMs (RAE-ELM, original Ada-ELM, and modified Ada-ELM) perform better than the basic ELM, which verifies that an ensemble ELM using AdaBoost.RT achieves better prediction accuracy than an individual ELM predictor. The averaged generalization performance of basic ELM is better than that of SVR and slightly worse than that of LS-SVR.

To find the best-performing original Ada-ELM or modified Ada-ELM model for a regression problem, the optimal threshold must be searched by brute force, since neither the input dataset nor the ELM networks give any guidance on the threshold selection. One has to carry out a set of experiments with different (initial) threshold values and then search among them for the best ensemble ELM model, which is time-consuming. Moreover, the generalization performance of the ensemble ELMs optimized in this way using original Ada-ELM or modified Ada-ELM can hardly exceed that of the proposed RAE-ELM. In fact, the proposed RAE-ELM is the best-performing learner among the six candidates on all fourteen real-world regression problems.
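For illustration, the exhaustive (C, γ) search described above could be sketched as follows. This is an assumption-laden re-creation, not the paper's code: scikit-learn's SVR stands in for the Matlab implementation actually used, and selection is by validation RMSE as in the paper's protocol.

```python
import numpy as np
from itertools import product
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

def svr_grid_search(X_tr, y_tr, X_val, y_val):
    """Search the 2500 (C, gamma) pairs from {2^-24, ..., 2^25}^2
    for the Gaussian-kernel SVR, keeping the pair with lowest RMSE."""
    grid = [2.0 ** k for k in range(-24, 26)]          # 50 candidate values
    best, best_rmse = None, np.inf
    for C, g in product(grid, grid):
        model = SVR(kernel="rbf", C=C, gamma=g).fit(X_tr, y_tr)
        rmse = mean_squared_error(y_val, model.predict(X_val)) ** 0.5
        if rmse < best_rmse:
            best, best_rmse = (C, g), rmse
    return best, best_rmse
```

Note that 2500 fits per dataset is exactly the brute-force cost the RAE-ELM threshold mechanism is designed to avoid.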
Conclusion

In this paper, a robust AdaBoost.RT-based ensemble ELM (RAE-ELM) for regression problems is proposed, which combines ELM with the novel robust AdaBoost.RT algorithm. Combining the effective learner, ELM, with this ensemble method yields a hybrid method that inherits their intrinsic properties and achieves better prediction accuracy than an individual ELM predictor. ELM tends to reach its solutions straightforwardly, and its regression error rate is in general much smaller than 0.5; selecting ELM as the "weak" learner can therefore avoid overfitting. Moreover, since ELM is a fast learner with quite high regression performance, it contributes to the overall generalization performance of the ensemble ELM. The proposed robust AdaBoost.RT algorithm overcomes the limitation of the available AdaBoost.RT algorithm and its variants, in which the threshold value is manually specified and may only be ideal for a very limited set of cases. The new robust AdaBoost.RT algorithm utilizes the statistical distribution of the approximation errors to dynamically determine a robust threshold: the robust threshold for each weak learner WL_t is self-adjustable and is defined as the scaled standard deviation of the approximation errors, λσ_t. We analyzed the convergence of the proposed robust AdaBoost.RT algorithm and proved that the error of the final hypothesis output by the ensemble, E_ensemble, lies within a significantly tighter bound.

The proposed RAE-ELM is robust with respect to differences between regression problems and variations in the approximation error rates, which do not significantly affect its highly stable generalization performance. The threshold, one of the key parameters of the ensemble algorithm, requires no human intervention; instead, it is self-adjusted according to the actual regression behaviour of the ELM networks on the input dataset. This mechanism enables RAE-ELM to adapt sensitively to the intrinsic properties of the given regression problem.

The experimental comparisons in terms of stability and accuracy among the six algorithms (RAE-ELM, basic ELM, original Ada-ELM, modified Ada-ELM, SVR, and LS-SVR) on regression problems verify that all the AdaBoost.RT-based ensemble ELMs perform better than SVR and, more remarkably, that the proposed RAE-ELM always achieves the best performance. The boosting effect of the proposed method is not significant for small-sized, low-dimensional problems, since an individual ELM network may already be sufficient to handle such problems well. It is worth pointing out that the proposed RAE-ELM outperforms the others especially on high-dimensional or large-sized datasets, which is a convincing indicator of good generalization performance.

Figure 3: The average testing RMSE with different values of T and λ in RAE-ELM for the Parkinson disease dataset.

Figure 1: Different regression error distributions for different weak learners under the robust threshold. (a) Error distribution of weak learner WL_t that results in a large threshold. (b) Error distribution of weak learner WL_{t+j} that results in a small threshold.

Table 1: Specification of real-world regression benchmark datasets.
Table 2 : Performance of proposed RAE-ELM with different values of and for Parkinson disease dataset. Table 3 : Parameters of RAE-ELM and other learning algorithms. Table 4 : Result comparisons of testing RMSE of RAE-ELM and other learning algorithms for real-world data regression problems.
8,607.2
2015-05-26T00:00:00.000
[ "Computer Science" ]
The Role of Intermittent Hypoxia on the Proliferative Inhibition of Rat Cerebellar Astrocytes Sleep apnea syndrome, characterized by intermittent hypoxia (IH), is linked with increased oxidative stress. This study investigates the mechanisms underlying IH and the effects of IH-induced oxidative stress on cerebellar astrocytes. Rat primary cerebellar astrocytes were kept in an incubator with an oscillating O2 concentration between 20% and 5% every 30 min for 1–4 days. Although the cell loss increased with the duration, the IH incubation didn’t induce apoptosis or necrosis, but rather a G0/G1 cell cycle arrest of cerebellar astrocytes was noted. ROS accumulation was associated with cell loss during IH. PARP activation, resulting in p21 activation and cyclin D1 degradation was associated with cell cycle G0/G1 arrest of IH-treated cerebellar astrocytes. Our results suggest that IH induces cell loss by enhancing oxidative stress, PARP activation and cell cycle G0/G1 arrest in rat primary cerebellar astrocytes. Introduction Intermittent hypoxia (IH) is defined as repeated episodes of hypoxia interspersed with episodes of normoxia [1]. Although beneficial effects of IH pre-conditioning in subsequent lethal hypoxia in mice had been reported [2], the link between IH and several adverse events such as hypertension, developmental defects, neuropathological problems and sleep apnea syndrome have not been examined. Sleep apnea is a major public health problem because of its high prevalence and severe life-threatening consequences [3]. Obstructive sleep apnea (OSA), manifested as periodic decreases of arterial blood oxygen or intermittent hypoxia (IH), is the most prevalent type of sleep apnea. Patients with OSA have increased risk of cardiovascular diseases and neuro-cognitive deficits [4,5]. Magnetic resonance imaging studies in OSA patients have revealed significant size-reductions in multiple sites of the brain, including the cortex, temporal lobe, anterior cingulated, hippocampus, and cerebellum [6]. Reoxygenation (therapy) of OSA increases the risk of oxidative stress and cell injury. Oxidative stress results primarily from excessive ROS, including superoxide (O 2 − ‧), hydrogen peroxide (H 2 O 2 ), and the hydroxyl radical (OH‧) [7]. Cells exposed to excessive oxidative stress are often subject to unfolded protein response, DNA damage and cell death. DNA damages usually results in, Poly (ADP-ribose) polymerase (PARP) activation, triggering the progression of the cell cycle to facilitate DNA repair [8,9]. In case of severe DNA damage, the over-activation of PARP will lead to NAD + /ATP-depletion necrosis or AIF-mediated apoptosis [9,10]. Increasing levels of ROS are also associated with the IH-induced CNS dysfunction. Astrocytes are dynamic cells that maintain the homeostasis of CNS, and establish and maintain the CNS boundaries, including the blood-brain barrier (BBB) and the glial limitans, through interactions with endothelial and leptomeningeal cells, respectively [11]. Several reports have suggested that astrocytes promote remyelination and the formation of new synapses and neurons through the release of neurotrophic factors [12,13]. Astrocytes (star-shaped cells) are involved in the physical structuring of the brain. They are the most abundant glial cells in the brain that are closely associated with neuronal synapses [14], and they regulate the transmission of electrical impulses within the brain. 
Glial cells are also involved in providing neurotrophic signals to neurons required for their survival, proliferation, and differentiation [15]. In addition, reciprocal interactions between glia and neurons are essential for many critical functions in brain health and disease. Glial cells play pivotal roles in neuronal development, activity, plasticity, and recovery from injury [16]. The idea that astrocytes have active roles in the modulation of neuronal activity and synaptic neurotransmission is now widely accepted [17]. This study evaluates the effects of IH-induced oxidative stress on rat cerebellar astrocytes cell loss, as well as the underlying pathways involved in these processes. We show ROS accumulation and PARP activation in IH-induced cell loss in rat cerebellar astrocytes. We further demonstrate PARP and p21 activation play roles in IH-induced cell cycle arrest and proliferation inhibition. Primary cultures of rat cerebellar astrocytes All procedures were performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the Tzu Chi University. The protocol was approved by the Institutional of Animal Care and Use Committee (IACUC) of the Tzu Chi University (Permit Number: 96062). All efforts were made to minimize animal suffering. In brief, astrocyte cultures were prepared from the cerebella of 7-day-old SD rats (of either sex), as described in our previous studies [18,19]. The cerebellum was dissected and dissociated by mechanical chopping, and then trypsinized to obtain cell suspension. Cells were grown on 12 mm-diameter coverslips and maintained in 5% CO 2 -/95% humidified air at 37°C. The culture medium was basal modified Eagle's medium (BMEM), supplemented with 10% fetal calf serum (FCS), and penicillin/streptomycin. Most of the cells remaining after 7 to 10 days of culturing of were astrocytes and were ready to be used for the following experiments. IH exposure IH exposure was performed as described [20]. Cerebellar astrocytes were placed in Plexiglas box chambers (length 20 cm, width 20 cm, height 8 cm) and exposed to normoxia (RA; 20% O 2 , 5% CO 2 , and balance N 2 ) or intermittent hypoxia (IH; 5% O 2 , 5% CO 2 , and balance N 2 for 30 min alternating with 30-min to RA) using a timed solenoid valve controlled by DO-166MT-1SXS (Shelfscientific, USA) for 1-4 days. Oxygen levels in the chamber were continuously monitored using an oxygen detector. MTT assay Cell viability after treatment with various conditions was evaluated using the MTT assay preformed in triplicate. Briefly, cells (2 x 10 5 /well) were incubated in a 3.5 cm petri dish containing 2 ml of serum-containing medium. Cells were allowed to adhere for 18-24 h and then were washed with phosphate-buffered saline (PBS). After treatment for the indicated condition, cells were washed with PBS, and culture medium containing 300 μg/ml MTT was added for 1 h at 37°C. After the MTT medium was removed, 2 ml of DMSO were added to each well. Absorbance at 570 nm was detected by a Multiskan EX ELISA Reader (Thermo Scientific, Rockford, IL). The absorbance for control group cells was considered to be 100%. Cell cycle analysis The cell cycle was determined by flow cytometry using DNA staining dye to reveal the total amount of DNA. Cells were harvested with 0.25% trypsin/EDTA, then collected, washed with PBS, fixed with cold 70% ethanol for 1 h, and stained with a solution containing 20 μg/ml propidium iodide (PI), 0.2 mg/ml RNase A, and 0.1% Triton X-100 for 1 h in the dark. 
The cells were then analyzed using a FACScan flow cytometer (equipped with a 488-nm argon laser) to measure the DNA content. The data were obtained and analyzed with CellQuest 3.0.1 (Becton Dickinson, Franklin Lakes, NJ) and ModFitLT V2.0 software. Immunocytochemistry Cells cultured on coverslips were treated with various conditions and fixed with cold 4% paraformaldehyde. The fixed cells were washed twice in PBS, and incubated in a cold permeabilization solution (0.15% Triton X-100) for 5 min. Cells were washed with PBS and incubated with 5% non-fat milk at room temperature for 10 min. First antibodies were incubated at 4°C overnight. The cells were washed with PBS three times and then incubated with FITC or TRITCconjugated secondary antibody for 1 h at room temperature. The cells were then washed with PBS three times and counterstained with 300 nM Hoechst 33342 for 10 min. Images were obtained with a confocal microscope (TCS-SP, Leica). TUNEL assay Cells were subjected to IH or RA for the indicated time and then examined for apoptosis using the TUNEL assay (In Situ Cell Death Detection Kit, Roche). Western blotting Cells were lysed on ice with 200 μl of lysis buffer (50 mM Tris-HCl, pH 7.5, 0.5 M NaCl, 5 mM MgCl2, 0.5% Nonidet P-40, 1 mM phenylmethylsulfonyl fluoride, 1 μg/ml pepstatin, and 50 μg/ml leupeptin) and centrifuged at 10000 x g at 4°C for 10 min. The protein concentrations in the supernatants were quantified using a BSA Protein Assay Kit. Electrophoresis was performed on a NuPAGE Bis-Tris Electrophoresis System using 20 μg of reduced protein extract per lane. Resolved proteins were then transferred to PVDF membranes. Membranes were blocked with 5% non-fat milk for 1 h at room temperature and then probed with the appropriate dilution of primary antibodies at 4°C overnight: β-actin (chemicon). p21, cyclin D1 (cell signaling). After the PVDF membrane was washed three times with TBS/0.2% Tween 20 at room temperature, it was incubated with the appropriate secondary antibody (goat anti-mouse or anti-rabbit, 1:10000) and labeled with horseradish peroxidase for 1 h at room temperature. All proteins were detected using Western Lightning Chemiluminescence Reagent Plus (Amersham Biosciences, Arlington Heights, IL). Confocal microscopy Cells were observed using a laser scanning confocal microscope (TCS-SP, Leica). Images were analyzed using the microscope's bundled software. Statistics The results of fluorescence measurements and cell proliferation experiments are expressed as the mean ± SEM. The t-test and one-way ANOVA with post-hoc test were performed to test differences between groups using SPSS 18.0 software (SPSS Taiwan Corp.). All tests were considered to be statistically significant when p < 0.05. Intermittent hypoxia (IH) accelerated the cell loss of rat cerebellar astrocytes in vitro To elucidate the effects of intermittent hypoxia on rat cerebellar astrocytes, cells were cultured in an RA (normoxia) or IH (intermittent hypoxia) chamber for 1 to 4 days. After RA or IH incubation, cells were fixed and investigated by immuno-staining. Astrocytes were stained with anti-GFAP (Gilial fibrillay acidic protein, green fluorescence), and nuclei were stained with Hoechst dye (blue fluorescence) (Fig 1A). The cell number of astrocytes cultured in 5% CO 2 on day 0 was set at as 100%. RA1~RA4 respectively represent the cell counts of astrocytes in normoxia from days 1 to 4. IH1~IH4 respectively represent the cell counts of astrocytes in IH from days 1 to 4. 
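As a small illustration of the statistical workflow described in the Methods above (mean ± SEM, t-test, one-way ANOVA with a post-hoc test at p < 0.05), the following is a hedged Python/SciPy sketch. The replicate values are hypothetical placeholders, not the study's data, and SciPy stands in for the SPSS software actually used.

```python
import numpy as np
from scipy import stats

# Hypothetical fluorescence-intensity replicates (% of control) per condition.
ra  = np.array([100.2, 98.7, 101.5, 99.9])
ih3 = np.array([125.4, 128.1, 122.6, 124.9])
ih4 = np.array([146.7, 149.2, 143.8, 147.1])

f, p = stats.f_oneway(ra, ih3, ih4)           # one-way ANOVA across groups
posthoc = stats.tukey_hsd(ra, ih3, ih4)       # post-hoc test (SciPy >= 1.7)
t, p_t = stats.ttest_ind(ra, ih4)             # pairwise t-test
sem = ih4.std(ddof=1) / np.sqrt(ih4.size)     # SEM, as reported in the text

print(f"ANOVA: F = {f:.2f}, p = {p:.4g}; t-test RA vs IH4: p = {p_t:.4g}")
print(f"IH4 mean ± SEM = {ih4.mean():.1f} ± {sem:.2f} (significant if p < 0.05)")
```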
Cell loss due to IH was not related to apoptosis or necrosis in rat cerebellar astrocytes To clarify the roles of apoptosis and necrosis in IH-induced cell loss, astrocytes were incubated in RA or IH chambers for 4 days and analyzed using the TUNEL assay and PI immuno-staining (Fig 2A). Astrocytes treated with 100 μM H 2 O 2 were used as the positive control. There were no TUNEL-positive (green fluorescence) cells in the RA4 or IH4 groups as compared to the H 2 O 2 group (Fig 2A, upper panel). There were also no PI-positive (red fluorescence) cells in RA4 or IH4 group as compared to the H 2 O 2 group (Fig 2A, lower panel). In addition, when the sub-G1 cells of various treatments were analyzed using flow cytometry, no significant increase of the sub-G1 cell population was found in the RA4 or IH4-treated groups (Fig 2B). These results suggest that IH incubation-induced cell loss didn't correlate with apoptosis or necrosis induction in rat cerebellar astrocytes. IH induced G0/G1 phase arrest in rat cerebellar astrocytes The effect of IH on cell cycle progression was also examined: Flow cytometric analysis showed that IH resulted in the accumulation of cells in G0/G1 phase arrest (Fig 3A). Treatment of cells with IH for 3 and 4 days increased the percentage of cells in the G0/G1 phase to 81.09 ± 0.33% and 80.82 ± 0.35%, respectively, as compared to the control group (78.64 ± 0.54%, Fig 3B). To further examine the underlying mechanism of the G0/G1 arrest caused by IH, the expression of the cell cycle regulatory protein, p21, was examined. Immuno-staining showed that the fluorescence intensity of p21 was increased higher in the IH groups (both IH3 & IH4) than in the RA control ( Fig 3C). The relative intensities of the fluorescence were IH3 (125.40 ± 8.01%) and IH4 (146.73 ± 5.68%), as compared to the RA control astrocytes (Fig 3D). The upregulation of p21 by IH in astrocytes was further validated by western blotting. We also found that the upregulation of p21 concurred with the inhibition of cyclin D expression (Fig 3E). These results suggest that IH induced cell cycle G0/ G1 arrest and might be associated with the activation of p21 in rat cerebellar astrocytes. IH induced ROS accumulation in rat cerebellar astrocytes in vitro Previous studies have suggested that IH may increase ROS in experimental animals and in cell cultures. To further investigate the role of ROS in IH-induced astrocytic cell loss, we first examined the O 2 -• level after IH incubation. Astrocytes were incubated in IH condition for number of days indicated in Fig 4A and stained with DHE to detect intracellular O 2 -• ( Fig 4A). Co-treatment of the astrocytes with 5U/ml superoxide dismutase (SOD) enhanced the O 2 -•to H 2 O 2 , thus reducing the IH-induced fluorescence intensity (IH3+SOD group in Fig 4A lower panel). The respective average fluorescence intensities of DHE staining in IH1 to IH4 groups were 110.4 ± 4.23%, 108.83 ± 6.75%, 129.42 ± 10.21%, and 157.22 ± 16.07%. The fluorescence intensity significantly decreased to 97 ± 4.48% after SOD co-treatment (Fig 4B). OH• level were also examined by DCFDA staining. The fluorescence intensities of the IH3 and IH4 groups were higher than in the control group (Fig 4C). Co-treatment with 100 nM 1,10-Phenanthroline (Phe) decreased the ROS generation in IH3 group (Fig 4C lower panel). The fluorescence intensities increased to 168.44 ± 11.82% and 151.69 ± 9.59% (IH3 and IH4, respectively) and significantly decreased to 124.33 ± 5.99% after Phe co-treatment ( Fig 4D). 
These data suggest that IH induces ROS (O₂⁻• and OH•) accumulation in rat cerebellar astrocytes in vitro.

IH induced PARP activation in rat cerebellar astrocytes in vitro

Excessive accumulation of ROS induces oxidative stress and leads to DNA damage. Poly(ADP-ribose) polymerase (PARP) is a family of proteins activated by DNA damage and apoptosis. PARP usually attaches to regions of damaged DNA and catalyzes the synthesis of poly(ADP-ribose) (PAR) chains on itself and adjacent nuclear proteins; PAR thus serves as a signal for other DNA repair enzymes. To elucidate the roles of ROS and DNA damage in the IH-induced cell loss of rat cerebellar astrocytes, we investigated the expression of PAR in the nuclei of IH-treated astrocytes. The fluorescence intensity of the PAR chains was dramatically increased in the IH group as compared to the control group (Fig 5A). Astrocytes treated with 100 μM H₂O₂ served as the positive control. Co-treatment with the PARP inhibitors 3-aminobenzamide (3-AB, 1 mM) or DPQ (1 mM) diminished the IH-induced PAR expression (Fig 5B). The fluorescence intensity was quantified and compared to the control (Fig 5C): IH3+3-AB (96.31 ± 3.12%), IH3+DPQ (106.78 ± 2.06%), IH4+3-AB (105 ± 4.86%), and IH4+DPQ (93.29 ± 7.15%).

IH-induced cell cycle arrest was inhibited by anti-oxidants or PARP inhibitors

Since IH-induced astrocytic cell loss was rescued by ROS or PARP inhibitors, we further examined the cell cycle profiles after SOD or DPQ treatment. Flow cytometric analysis showed that treatment with SOD or DPQ significantly inhibited IH-induced G0/G1 arrest in astrocytes: IH3 (81.09 ± 0.33%), IH3+SOD (76.75 ± 0.52%), and IH3+DPQ (74.59 ± 0.9%) (Fig 7A). The G0/G1 regulatory proteins were also examined by western blot analysis: ROS or PARP inhibitors increased the expression level of cyclin D1 and decreased the expression level of p21 (Fig 7B). These results suggest that IH induces cell cycle G0/G1 arrest in rat cerebellar astrocytes through ROS accumulation and PARP activation, and that the arrest can be partially rescued by ROS or PARP inhibitors.

Discussion

Our results demonstrate that IH induced oxidative stress in rat cerebellar astrocytes and led to cell loss in vitro. Several previous reports indicated that chronic IH can elevate oxidative stress and increase apoptosis in mouse cortical neurons [21,22]. We previously reported IH-induced oxidative stress and cell death in rat cerebellar granule cells [20]. In rat cerebellar astrocytes, however, we demonstrate that IH induces G0/G1 cell cycle arrest rather than apoptosis or necrosis. We also clarified that cyclin D1 is down-regulated and p21 is up-regulated after IH treatment. Recent studies have revealed that oxidative stress can cause cyclin D1 depletion [7], and cell cycle arrest can serve as a protective mechanism that reduces genotoxic damage from oxidative stress. The expression of cyclin D1 represents an important marker for assessing the integration of the proliferative and growth-inhibitory effects of oxidants on the redox-dependent signaling events that control cell cycle progression [23]. Reducing intracellular ROS levels with ROS inhibitors can lead to increased expression of cyclin D1, entry into the G1 phase, and progression into the S phase. The p21 protein binds to and inhibits cyclin E/A-CDK2 and cyclin D-CDK4 complexes; the increase in p21 following IH treatment indicates that p21 might play an important role in inhibiting cell cycle progression.
Both PARP and ROS inhibitors reduced the expression of p21 and restored cell cycle progression. IH-induced oxidative stress can lead to DNA damage, PARP activation, and PAR polymerization. Inhibition of oxidative damage by ROS inhibitors reduced PAR polymerization and rescued the cell loss induced by IH. Treatment with a PARP inhibitor partially rescued cell loss after IH and restored the expression of cyclin D1 as well as the progression of the cell cycle. These data suggest that PARP activation is involved in IH-induced astrocytic cell cycle arrest.

The cellular oxidation-reduction environment is influenced by the production and removal of ROS [7]; cellular ROS levels can thus regulate cellular processes including cell proliferation and differentiation. The roles of astrocytes during ischemic injury have recently attracted considerable attention [24]. Astrocytes promote neuronal survival during ischemia by limiting neuronal damage and cell death caused by ROS [25,26], excitotoxins [27] and other stressors. The glutathione system is responsible for the rapid clearance of organic hydroperoxides by astrocytes, along with the defense of neurons against ROS [28,29]. According to the astrocyte-neuron lactate shuttle hypothesis, astrocytes play a major role in supplying neurons with energy in the form of lactate [30,31]. Monocarboxylate transporter 4 (MCT4) is expressed specifically in astrocytes and is involved in this process. Recent studies have suggested that IH preconditioning can protect neurons against epilepsy through the upregulation of MCT4 expression in astrocytes in vitro and in vivo [32]. The expression level of MCT4 in astrocytes is controlled by oxygen tension via a HIF-1α-dependent mechanism [33], and PARP-1 inhibition has been reported to reduce the transcriptional activity of HIF-1α [34]. Our results showed that IH-induced astrocyte cell loss can be rescued by PARP inhibition; however, the MCT4-mediated neuroprotective effect might be diminished by PARP inhibition, and this should be examined in future research.
4,341.4
2015-07-14T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Self-gravitating anisotropic model in general relativity under modified Van der Waals equation of state: a stable configuration

The purpose of this paper consists in presenting models of compact stars described by a new class of exact solutions to the field equations, in the context of general relativity, for a fluid configuration which is locally anisotropic in the pressure. With current sensitivities, we considered a non-linear form of modified Van der Waals equation of state, viz. $p_{r} = \alpha\rho^{2} + \frac{\beta\rho}{1+\gamma\rho}$, as well as a gravitational potential $Z(x)$ as a generating function, by exploiting an anisotropic source of matter which served as a basis for generating the confined compact stars. The exact solutions are formed by correlating an interior space-time geometry to an exterior Schwarzschild vacuum. Then, we analyze the physical viability of the model generated and compare it with observational data of some heavy pulsars coming from the Neutron Star Interior Composition Explorer. The model satisfies all the required pivotal physical and mathematical properties in the compact structures study, offering empirical evidence in support of the evolution of realistic stellar configurations. It is shown to be regular, viable, and stable under the influence generated by the parameters coming from the theory, namely $\alpha$, $\beta$, $\gamma$, $\delta$, everywhere within the astral fluid in the investigated high-density regime that supports the existence of realistic heavy pulsars such as PSR J0348+0432, PSR J0740+6620 and PSR J0030+0451.

Introduction

One of the most interesting unsolved issues in modern physics is the nature of dense nuclear matter. Due to the constraints of terrestrial experiments regarding the properties of the nuclear matter equation of state (EoS) at supra-saturation densities, that is, greater than the nuclear saturation density $2.8 \times 10^{14}$ g cm⁻³, scientific groups are focusing on investigating compact objects in the cosmos, including mainly white dwarfs (WDs) and neutron stars (NSs). Specifically, NSs are considered to be the best extraterrestrial labs for studying the undetected features of dense matter [1][2][3][4].
Our knowledge with regard to NSs has considerably extended, since the pioneer work of Oppenheimer and Volkoff [5]. Mass, radius and other parameters of stellar objects depend on the EoS selected for the dense matter. Dozens of EoS have been proposed to describe NS matter over the years [6]. Since the EoS is one of the main observables characterizing matter features under intense conditions, hence its constraint, therefore, necessitates integrating nuclear physics and astrophysics. Many theoretical and experimental efforts along with astrophysical observations have been put to probe the properties of dense nuclear matter. Several NSs with a mass about 2 M probed over the last decennary whole enough stringent restraints on nuclear matter EoS. Amongst the most enormous observed pulsars is the pulsar PSR J1614-2230 having the smallest uncertainty on the mass M = 1.906 +0.016 −0.016 M [7]. Other two pulsars with M > 2 M are PSR J0348+0432 with M = 2.01 +0.04 −0.04 M [8] and MSP J0740+6620, recently discovered with a mass of 2.14 +0. 10 −0.09 M [9]. In recent decades, the anisotropy effects on the modeling of relativistic astrophysical objects in strong gravitational fields have already been discussed in some recent works. We can expect the emergence of unequal principal stresses, dubbed anisotropic fluid when modeling such high-density heavenly configurations above the nuclear density. This generally signifies that the radial pressure component is not equal to the components in the tangential direction viz., two dissimilar types of pressure components interior these relativistic astrophysical objects. It is interesting to mention here that the anisotropy effect has been first predicted by Jeans in 1922 [10] for self-gravitating configurations in the Newtonian regime. Then, an engrossing vision about more realistic astrophysical systems where the nuclear interactions must be analyzed in a relativistic way when a stellar structure with density energy ρ > 10 15 g cm −3 was given by Ruderman [11]. In Refs. [11,12], the authors argued that the matter proportion in the extremely congested nucleus of a stellar structure could present unequal stresses. However, the authors [13] have thoroughly investigated the anisotropy sources at the heavenly interior. Thereafter, the authors [14] have evaluated and contended viable underlying cause for local anisotropy in self-gravitating structures using representative cases of both Newtonian and general relativistic circumstances. In the same context, several authors [15][16][17][18] have also been analyzed the source and consequences of local anisotropy on cosmic configurations. It is also worth mentioning here that in order to explore the the local pressure anisotropy effect on a well-defined basis, it is mandatory to know the substantial physical grounds accountable for its semblance, such as, e.g., the pion condensation, exotic stage transitions over gravitational crash [19,20], viscosity [21], presence of a strong nucleus or the existence of a type-IIIA super-critical fluid [22], heavy electromagnetic areas [23][24][25], slow turning of a fluid [26], emergence of willing distortion of Fermi surfaces [27,28], availability of super-critical fluid states with constrained Cooper pair orbital momentum [29][30][31][32], or constrained super-critical fluid momentum [33,34]. The EoS, which is the principal input to the Tolman-Oppenheimer-Volkoff equations [5,35], establishes the stable stages of a non-rotating NS, are constructed in diverse fashions. 
The non-relativistic formalism with some parametrizations of Skyrme [36] and the three-body potential of Akmal-Pandheripande-Ravenhall [37] are highly successful in portraying nuclear EoS, including the NS. Moreover, many exact stellar solutions to the Einstein field equations were obtained by various methods with generalized pathways for one of the metric potentials that does have a linear EoS [38][39][40][41][42], a quadratic EoS [43], a polytropic EoS [44,45], a Chaplygin EoS [46][47][48][49][50][51][52] and Van der Waals EoS [53][54][55] etc., and without a specific barotropic EoS linking pressure to energy density [56][57][58][59][60][61]. Despite the fact that many such works have been published over the years, only a small num-ber of these stellar solutions are compatible with non-singular metric functions via a physically agreeable stress-energy tensor. In this paper, we study a new class of solutions to Einstein's field equations representing static spherically symmetric anisotropic matter distribution in terms of a specified form of modified Van der Waals EoS such as p r = αρ 2 + βρ 1+γρ along with a gravitational potential Z (x) as a generating function. The exact solutions are formed by correlating an interior space-time geometry to an exterior Schwarzschild vacuum. Then, we study the physical viability of the model generated and compared with observational constraints from some massive NSs reported in the literature such as the millisecond pulsars PSR J0348+0432 [8], PSR J0740+6620 [9] and PSR J0030+0451 [62]. The paper is organized as follows. In Sect. 2, we briefly discuss the basic principles of an equivalent system of equations by using the Durgapal-Bannerji transformation, to represent an anisotropic static spherically symmetric matter distribution. In Sect. 3, we provide new classes of exact interior stellar solutions. Section 4 presents an insight of intersection circumstances for a sleek corresponding between intrinsic and extrinsic geometries, whereas Sects. 5, 6 and 7 discuss physical properties, validity, and stability. Finally, concluding remarks are reported in Sect. 8. Spherically symmetric space-time Our motive in this study is to discuss a model describing an anisotropic matter distribution with static spherical symmetry in terms of a boosting function obeying a Van der Waals type EoS. For this purpose, we start with static spherically symmetric spacetime that can be represented by the line element in Schwarzschild coordinates x a = (t, r, ϑ, ϕ). Here ν and λ represent the gravitational potentials which are only functions of r , and d 2 2 = dϑ 2 + sin 2 ϑ dϕ 2 portrays the metric on the two-sphere in polar coordinates. . The Einstein field equation is defined as, where T i j and G i j describe the energy-momentum tensor for matter distribution and Einstein's tensor, respectively. Here the Einstein's tensor G i j depends completely on the Ricci tensor R i j and the Ricci scalar R. This can be expressed as where g i j representing the metric tensor. Let's suppose that the matter implicated in the distribution is anisotropic in kind. By using the entire function, one thus obtains the function for energy-momentum tensor in the accompanying shape: whereas η j is the fluid 4-speed and η j η i = χ i χ j = 1, χ i is the unit space-like vector and thus η j χ i = 0. The above equation (4) gives the components of an anisotropic fluid's energy-momentum tensor at any point in the form of density ρ, radial pressure p r and transverse pressure p t . 
In this regard, the energy-momentum tensor T j i along with a simple form of line element can be expressed as with For the line element (1) and energy-momentum tensor (5), the system of Einstein field equations in relativistic units 8π G = c = 1, can be expressed as This stellar system of Eqs. (7)-(9) portrays the evolution of the gravitational field within an anisotropic celestial configuration. The gravitational mass contained within the spherical object of radius r is given by, while is an integration constant. We now use the transformation proposed for the first time in Ref. [63] The stellar system of Einstein field equations expressed in (7)-(9) becomes The gravitational mass expression (10) becomes in terms of x introduced in (11). It is interesting to observe that a physically realistic fluid distribution of matter expecting to fulfill the barotropic EoS viz., p r = p r (ρ). In this concern, we consider that the interior matter distribution obeys the modified Van der Waals EoS as follows, in order to successfully complete the stellar system of Eqs. (12)- (14). Here α, β and γ are real parameters. The decelerated and accelerated periods are determined by parameters, α, β and γ of the EoS, and in the restricting situation α, γ → 0, we can recover the dark energy EoS, with β = p r /ρ < −1/3. It has also been pointed out that the perfect fluid EoS p r = βρ represents an estimation of cosmic epochs portraying stationary circumstances, with phase transitions ignored [64]. Consequently, the modified Van der Waals model has the advantage of depicting the transition from a matter field ruled era to a scalar field ruled epoch without introducing scalar fields. Furthermore, it aids in the clarification of the cosmos by using a small number of ingredients, and the modified Van der Waals fluid definitely treats dark energy and dark matter as a single fluid. By restricting the free parameters [64], the modified Van der Waals scenario was also effectively challenged with a wide range of observational tests. On the other hand, this type of modified Van der Waals EoS expressed in (16) seems less economical and in comparison to observational tests, it is more flexible because of the wide number of free parameters. Next, it is conceivable to write the stellar system of Eqs. (12)- (14) in the simplest shape in terms of gravitational potential g rr i.e., Z , while in our stellar model the amount = p t − p r is the anisotropy test that provides with C is an integration constant. Consequently, the line element (1) can be expressed in terms of the new variables defined in (11) as follows, Therefore, the solution representing static spherically symmetric anisotropic matter distribution with the specified form of modified Van der Waals EoS can be readily established in accordance with the generating gravitational potential Z (x). Next, we discuss in detail how we build compact stellar configurations with anisotropic matter. Exact solutions for anisotropic compact heavenly structures In the system of Einstein field equations expressed in (17)- (22), there are six independent equations with independent variables namely, ρ, p r , p t , , y and Z . On the one hand, we can see that the stellar system of equations strongly depends on the gravitational potential Z (x). On the other hand, the system proposes that it is conceivable to define one of the amounts implicated in the integration process from equation (22) which is the master equation in the present study whose solution given by the relation (23). 
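Before moving on to the generating potential, it is useful to restate compactly the modified Van der Waals EoS (16) used above together with its dark-energy limit, reconstructed here from the flattened inline text:

```latex
p_{r} \;=\; \alpha\rho^{2} \;+\; \frac{\beta\rho}{1+\gamma\rho},
\qquad
\lim_{\alpha,\gamma \to 0} p_{r} \;=\; \beta\rho
\quad\text{with } \beta = p_{r}/\rho < -\tfrac{1}{3}
\text{ in the dark-energy regime}.
```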
For this purpose, we make an explicit choice for the gravitational potential Z (x) in the following form where δ is a positive real parameter. Here Z (x) = 1 at x → 0, which shows that for a broad range of values of the parameter δ, the form of gravitational potential has been found to be regular, positive, and non-singular at the origin, as well as well-behaved in the stellar interior, and thus satisfies all of the requirements leading to the solution's main physical acceptability. Now, on substituting (25) into (23), we get the explicit function of y which is where Consequently, the exact model for the stellar system of Eqs. (17)-(22) composed of energy density, radial and tangential components of pressure is obtained as follows then, using (22) and (25), we obtain the explicit form of the anisotropic parameter as follows where y is specified by the previously mentioned relationship (26). If < 0, the anisotropic factor is attractive in nature and repulsive if > 0. Matching conditions for anisotropic solution At this stage, the interior space-time is smoothly connected to the vacuum exterior Schwarzschild space-time at the stellar surface r = R s , and it is obvious that R s > 2M, while R s and M are the radius and total mass of the star, respectively. In this case, the line element for the stellar configuration at the junction surface with radius r = R s has the form Here, the total mass is denoted by M. Nevertheless, the following requirements must be met at the hyper-surface in order to ensure the smoothness and continuity of the inward space-time metric ds 2 − and the outside space-time ds 2 + at the boundary surface. [ and The interior and exterior spacetimes are represented by − and +, respectively, while the curvature is described by K i j . By using the continuity of the first fundamental form, which is [ds 2 ] =0, we can always get for any function F(r ). Furthermore, this arrangement provides us with, Following that, the spacetime (1) must achieve the second fundamental form, K i j , at the hyper-surface , which is equivalent to the O Brien and Synge junction condition [72]. In this context, we discovered that the radial pressure at the surface should be zero, i.e., when r = r , leading to The size of the stellar structure is determined by this requirement. Alternatively, − and + are being used to symbolize the interior and exterior sectors, respectively. The hyper-surface is then represented by the accompanying line element, The proper time boundary is denoted by τ . In this perspective, the boundary's extrinsic curvature can be written as, with n i denoting the coordinates in the boundary , and η ± k denoting the four-speed normal to . The components of this four-speed are obtained using the coordinates (y ν ± ) of τ ± as follows, The interior and exterior sector unit normal vectors can then be written as Then, utilizing the line elements (1) and (39) in conjunction with the Schwarzschild spacetime (33), we can formulate where [r ] = R s . Eq. (42) can be used to derive the non-zero components of the curvature (K i j ) as follows, Therefore, when we combine the junction condition When the preceding statement is inserted into the matching condition [K − 00 ] = [K + 00 ] , it produces the following result, Therefore, at the hyper-surface, the relevant criteria supplied by Eqs. 
(43)- (45) give rise to the following expressions, It is clear to observe that the condition (46) does not impose any constraints on the parameters, whereas the condition (47) imposes a constraint on the parameter A as Due to the complicatedness of the static and spherically symmetric solutions for the system of the field equations, we exhibit graphically that the radial reliance of our stellar system's physical quantities, which includes matter variables of the anisotropic model are well-behaved throughout the interior of the stellar configuration and hence the stellar model satisfies all necessary conditions. Fig. 1 Behaviour of the matter density ρ and the radial and transverse pressures ( p r , p t ) against the radial coordinate r of our stellar model for three heavy pulsars. For plotting these graphs, we use the numerical values of the constant parameters given in Table 1 5 Physical analysis We now proceed to discuss the physical acceptability of the stellar solutions acquired in this study. We will look at various physical features of compact stellar object formations and we demonstrate that acquired solutions are physically viable. We have considered the observational data of three compact astrophysical objects viz., PSR J0348+0432, PSR J0030+0451 and PSR J0740+6620 as models in order to show the anisotropic effects presented with spherical symmetry within space-time metric in the context of general relativity. The graphs were drawn by selecting parameter values as follows after comprehensive empirical fine-tuning: δ, α, β and γ for some heavy pulsars as shown in Table 1. The election of parameters have been such that the anisotropic stellar models are physically reasonable fulfilling the following physical requirements: • Necessary criteria for matter density and pressure components: From Fig. 1 we see that the density and pressure profiles are all monotonic decrease smoothly towards the surface layer of the stellar configurations, having their maximum values at the stellar center and the radial pressure p r are disappearing at the boundary of each stellar configuration r = R s . At the star's surface layer, however, matter density is always positive. The central density due to ordinary matter is evaluated as follows, Then, the central pressure for our present stellar model is acquired as follows, These informations immediately indicates that both density and pressure are non-negative within the interior of compact stellar configuration. On the other hand, as also seen in Fig. 1, at the stellar boundary, the transverse pressure is greater than zero, which is physically feasible [65]. Moreover, an anisotropic fluid scenario has been clearly stated by the supposition of particles in movement on circular orbits [66,67] and the transverse pressure of a surface layer is related to surface tension [68]. As it is clear from Fig. 1 that the anisotropy profile is a monotonically increasing function as one moves from the stellar centre towards the stellar boundary remaining finite and continuous in the interior and repulsive in nature. • The continuity of the extrinsic curvature via the corresponding hyper-surface: Continuity of the extrinsic curvature via the corresponding hyper-surface, at the surface layer of the stellar configuration r = R s gives the condition which yields We can obtain the positive radius R s by selecting appropriate parameters α, β, γ and δ. 
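The smooth matching to the exterior Schwarzschild vacuum invoked above reduces, in standard form, to continuity of the metric potentials across the boundary together with a vanishing radial pressure at the surface. The following is a reconstruction consistent with the text, not the paper's numbered equations:

```latex
e^{2\nu(R_s)} \;=\; 1 - \frac{2M}{R_s} \;=\; e^{-2\lambda(R_s)},
\qquad
p_{r}(R_s) \;=\; 0 .
```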
• The positivity of the energy conditions: The energy conditions are fundamental tools for GR since they permit us to analyze the casual and geodesic structure of space-time carefully. One path to deriving such conditions is through the Raychaudhuri equations [73][74][75], which define the action of correspondence of the gravity for timelike, spacelike, or lightlike curves. If we are working with an anisotropic fluid, the energy conditions, i.e., Strong energy conditions (SEC), Dominant energy conditions (DEC), Weak energy conditions (WEC), Trace energy conditions (TEC), and Null energy conditions (NEC) for GR are expressed as: where k = r, t. The non-negative profile of state variables ρ, p r and p t shown in Fig. 1 swiftly adheres to the first three constraints i.e., NEC infers that an observer traversing a null scheme will measure the typical matter density as non-negative, according to WEC, the matter density measured by an observer crossing a time-like scheme is constantly non-negative, and with regard to SEC, the trace of the tidal tensor analyzed by the corresponding observers is always non-negative. The non-negative evolutionary associated with DEC an TEC is also consistent with the fourth and fifth constraints, in which the DEC represents the mass-energy that will never be seen to flow faster than light and according to TEC, the stress-energy tensor trace should be necessarily non-negative depending on metric conventions. It is obvious from Fig. 2, that all energy conditions are carefully verified, resulting in a non-exotic matter content and are well-satisfied with the constraints of the realistic stellar configurations, which corroborate that our stellar model is well-behaved and describes an acceptable physical system. • Cracking method for anisotropic compact sphere stability: It is expected that the speed of sound will be less than the speed of light within a stellar interior, i.e., the square of radial (v 2 sr = dp r dρ ) and transverse (v 2 st = dp t dρ ) speeds of sound must fulfill the inequalities 0 ≤ v 2 sr ≤ 1 and 0 ≤ v 2 st ≤ 1 which is known as a causality condition. From Fig. 3 we can evidently see that throughout the interior of the stellar structures, the radial and transverse speeds of sound are always less than the speed of light c = 1, indicating that the causality condition is satisfied. Moving towards the stellar boundary, we discover that the difference in sound velocity decreases but causality is never violated and cracking will not occur in all our cases. Gravitational mass, compactness factor and gravitational red-shift The compactness factor of our stellar model is defined by a dimensionless parameter u which is the mass-to-radius ratio and it cannot be arbitrarily huge. As claimed by Buchdahl [70], the compactness factor of a stellar system for a fourdimensional fluid sphere should be smaller than 2M R < 8 9 ≈ 0.8888 to be a stable configuration. So, to come up with the compactness factor, in this section, we are fascinated to investigate the gravitational mass function for our stellar model which can be given as It should be noted here that the gravitational mass function is influenced by δ. From the above gravitational mass formula, the compactness factor can be calculated as Therefore, the gravitational red-shift of our present model correlating to the stated compactness factor is defined as follows, The profile of the gravitational mass function, the compactness factor and the gravitational red-shift are illustrated in Fig. 4. 
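Several expressions in this span were lost in extraction; the standard forms consistent with the surrounding text are the pointwise energy conditions for an anisotropic fluid (with $k = r, t$), the Buchdahl bound on the compactness, and the surface red-shift:

```latex
\text{NEC/WEC:}\;\; \rho \ge 0, \quad \rho + p_k \ge 0; \qquad
\text{SEC:}\;\; \rho + p_r + 2p_t \ge 0; \\
\text{DEC:}\;\; \rho \ge |p_k|; \qquad
\text{TEC:}\;\; \rho - p_r - 2p_t \ge 0; \\[4pt]
u(r) \;=\; \frac{2\,m(r)}{r} \;<\; \frac{8}{9}, \qquad
Z_s \;=\; \bigl(1 - 2M/R_s\bigr)^{-1/2} - 1 .
```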
Fig. 4 shows that the three quantities, viz. m(r), u(r) and Z(r), are monotonically increasing functions of the radial coordinate r and positive within the stellar system, and that the regularity of the gravitational mass function at the origin is ensured. As the gravitational mass function increases, the compactness factor increases, and its value u satisfies the maximum allowable mass-to-radius ratio of Buchdahl [70], i.e., it cannot exceed 8/9. According to the authors of [56-61,71], the surface gravitational red-shift of an anisotropic fluid sphere should satisfy Z_s ≤ 5 or Z_s ≤ 5.211. Against these bounds, our stellar system yields Z_s ≤ 0.301219, indicating that such compact structures are viable. The resulting values of the physical parameters, viz. R, ρ_c, ρ_s, p_c, 2M/R and Z_s, together with the constant parameters α, β, γ and δ, are listed in Tables 1 and 2. These numerical results confirm that the chosen stellar objects exhibit high red-shifts and surface densities greater than the nuclear saturation density, 2.8 × 10^14 g cm^-3. Consequently, for the selected model parameters, the generated solutions comply with the requirements of realistic stellar configurations and are in good agreement with the physical quantities determined by existing astronomical observations of the heavy pulsars PSR J0348+0432 [8], PSR J0740+6620 [9] and PSR J0030+0451 [62].

Fig. 2 Behaviour of the energy conditions against the radial coordinate r of our stellar model for three heavy pulsars, plotted with the constant parameters of Table 1.
Fig. 3 Behaviour of the squares of the radial and transverse speeds of sound (v_r^2, v_t^2), and of their difference, against the radial coordinate r for three heavy pulsars, plotted with the constant parameters of Table 1.
Fig. 4 Behaviour of the gravitational mass m(r), the compactness parameter u(r), and the gravitational red-shift Z_s against the radial coordinate r for three heavy pulsars, plotted with the constant parameters of Table 1.
Fig. 5 M-ρ_c and M-R diagrams of our stellar model for three heavy pulsars, plotted with the constant parameters of Table 1.

Static stability criterion and mass-radius diagram

In this section, we start from the static stability criterion developed by Chandrasekhar [76] to analyze the stability of stellar structures under radial disturbances. Harrison et al. [77] and Zeldovich and Novikov [78] simplified this criterion by imposing constraints on the total mass as a function of the central density; to apply them, we computed the total mass as a function of ρ_c. Figure 5 shows the variation of the mass with the central density. The mass increases with ρ_c and tends to saturate over the range considered, so that greater values of ρ_c enlarge the stable range of density under radial oscillation. We conclude that the stellar solution is stable under radial disturbances.
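The Harrison-Zeldovich-Novikov simplification used above can be stated compactly: a configuration is stable against radial perturbations wherever the total mass grows with the central density. In its standard form, with M(ρ_c) the mass computed from the solution:

```latex
\frac{dM(\rho_c)}{d\rho_c} > 0 \ \Rightarrow\ \text{stable configuration}, \qquad
\frac{dM(\rho_c)}{d\rho_c} < 0 \ \Rightarrow\ \text{unstable configuration}.
```

The saturation of M(ρ_c) visible in Fig. 5 is precisely the behaviour this criterion requires of a stable branch.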
Further, we examined the state of the compact stellar objects by studying the M-R diagram produced by our stellar model in Fig. 5. In this regard, we provide a useful description of the effects induced by the relevant parameters α, β, γ and δ, in order to give an efficient and more realistic model. Under the influence of these parameters, we observe that the maximum value of the mass M (in solar masses) and the associated radius R (in km) decrease, resulting in a more compact and less massive stellar system. We also find good agreement, represented by the horizontal bands, with the observational data on the M-R diagram for the three compact stellar objects PSR J0740+6620, PSR J0348+0432 and PSR J0030+0451; many others can likewise be matched.

Concluding remarks

In this paper, we have focused on providing a new, well-behaved class of exact anisotropic solutions for viable, highly compact, static spherically symmetric configurations as an alternative description of neutron stars in the context of general relativity. For this purpose, we considered a non-linear form of a modified Van der Waals EoS, viz. p_r = αρ^2 + βρ/(1 + γρ), for the pressure-energy density relationship, together with a gravitational potential Z(x) as a generating function, via an anisotropic matter distribution; these formed the basis for building bounded stellar configurations. The models under consideration are regular, viable, and stable everywhere within the stellar fluid under the influence of the parameters arising from the nonlinear EoS and the gravitational potential, viz. α, β, γ and δ. One notable observation is that the predicted radii of the observed heavy pulsars follow readily from the continuity of the second fundamental form, while the maximum observed masses and corresponding radii are reproduced by fine-tuning the parameters of the theory. Finally, it is worth mentioning that the model admits all the pivotal physical and mathematical attributes required in compact-star studies, which provides circumstantial evidence in favour of the formation of realistic stellar configurations in the investigated high-density regime. In effect, our stellar model supports the existence of realistic heavy pulsars such as PSR J0740+6620, PSR J0348+0432 and PSR J0030+0451.

Data Availability Statement This manuscript has no associated data or the data will not be deposited. [Authors' comment: This is a theoretical study and the results can be verified from the information available.]

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3.
2-Ketogluconate Kinase from Cupriavidus necator H16: Purification, Characterization, and Exploration of Its Substrate Specificity

We have cloned, overexpressed, purified, and characterized a 2-ketogluconate kinase (2-dehydrogluconokinase, EC 2.7.1.13) from Cupriavidus necator (Ralstonia eutropha) H16. Exploration of its substrate specificity revealed that three ketoacids (2-keto-3-deoxy-D-gluconate, 2-keto-D-gulonate, and 2-keto-3-deoxy-D-gulonate) with structures close to the natural substrate (2-keto-D-gluconate) were successfully phosphorylated, at efficiencies lower than or comparable to that of 2-ketogluconate, as shown by the measured kinetic constants. Eleven aldo and keto monosaccharides of different chain lengths and stereochemistries were also assayed but were not found to be substrates. 2-Ketogluconate-6-phosphate was synthesized at a preparative scale and fully characterized for the first time.

Introduction

Rare ketoses have great potential, for instance as chiral auxiliaries, as sweeteners, or, thanks to their biological properties, in pharmaceutical chemistry [1]. Among them, phosphorylated monosaccharides are of particular interest due to their central role in metabolic pathways [2,3]. Sugar phosphates bearing a 2-keto functionality can be produced by lyases or transferases; more precisely, they can be obtained with a variety of aldolases [4-9], a transaldolase [10], or a transketolase [5,11-17]. In vivo, phosphorylated monosaccharides are often obtained by direct phosphorylation of the corresponding monosaccharide, catalyzed by an ATP-dependent kinase. Such enzymes have also been applied efficiently to the preparation of natural or unusual phosphorylated sugars [2,3]. Kinases, as biocatalysts for the production of rare 2-ketoaldonate phosphates, could also play a key role in the synthetic design of new biologically interesting compounds and enrich the arsenal of available biocatalysts. We turned to a bacterial 2-ketogluconate kinase (KGUK; EC 2.7.1.13) as another, somewhat neglected, biocatalyst for the formation of 2-ketoaldonate-6-phosphates. While the chemical preparation of 2-ketogluconate-6-phosphate (KGP) by vanadate/NaClO3-catalyzed synthesis has been described [18], an enzymatic approach is of interest because it uses nontoxic substances under sustainable conditions.

KGUK is involved in the glucose and 2-ketogluconate catabolism of several aerobic bacteria, but relatively few bacterial species are able to utilize 2-ketogluconate as the sole carbon source for growth and energy provision. Besides the Gram-positive Leuconostoc mesenteroides [19], the main 2-ketogluconate utilizers are Gram-negative proteobacteria such as Pseudomonas and Aerobacter/Enterobacter/Klebsiella. 2-Ketogluconate kinase was first discovered as an inducible activity in 1953 in Aerobacter cloacae [32] and then in Pseudomonas fluorescens [24]; the enzyme's product, 2-ketogluconate-6-phosphate, was isolated and described at the same time [23]. Later, four other KGUKs were identified: (i) from the Gram-positive L. mesenteroides [19], (ii) from Aerobacter aerogenes (nowadays classified as Klebsiella pneumoniae) [25], (iii) from Hydrogenomonas eutropha H16 (newer and alternative designations are Ralstonia eutropha H16 or Cupriavidus necator H16) [33], and (iv) from P. aeruginosa [20,28]. To the best of our knowledge, however, no study of the substrate specificity of any bacterial KGUK has been published so far. We focused our attention on the kinase from C. necator, the complete genome sequence of which has been published [34]; we abbreviate this enzyme KGUKCnec.
In this work, we cloned, overexpressed, and purified the recombinant N-terminally His-tagged 2-ketogluconate kinase from C. necator (KGUKCnec) in Escherichia coli. For the first time, its substrate specificity was studied with different commercially available sugars and with various synthetic analogues of the natural substrate 2-ketogluconate. Finally, a preparative-scale synthesis of 2-ketogluconate-6-phosphate was performed to demonstrate the synthetic potential of this enzyme.

Results and Discussion

2.1. Cloning, Overexpression, Purification, and Characterization of KGUK from C. necator

The kguK gene from C. necator strain H16 was cloned by PCR amplification from chromosomal DNA.
The protein matched the expected molecular weight of the cloned His-tagged KGUKCnec (35.7 kDa). Analysis of the cell-free extract (CFE) showed good recombinant enzyme production (4800 U per litre of culture) in the soluble fraction. Thanks to its attached His6 tag, the enzyme could easily be purified by immobilized metal affinity chromatography (IMAC). Starting from 200 mL of expression culture (0.85 g wet weight of cells after sedimentation), 20 mL of CFE were obtained (260 mg of protein with a specific activity of 0.38 U/mg). After IMAC purification, 4.7 mg of protein were obtained with a specific activity of 8.7 U/mg (Table 1); the final yield of the purification was 42% and the purification factor was 22.8-fold. The effect of imidazole carried over from the IMAC fractions on KGUKCnec activity was evaluated: no differences in activity were detected before and after imidazole removal. In fact, imidazole displayed stabilizing properties for KGUKCnec activity during storage. Purified enzyme stored at 4 °C in the presence of 0.25 M imidazole retained 90% of its initial activity after one month, whereas protein samples stored without imidazole were completely inactive after only one night at 4 °C. The addition of other possible stabilizers, such as BSA or glycerol, did not increase stability, and freezing or freeze-drying the protein was likewise unsuccessful. Consequently, IMAC fractions were stored directly after purification, and imidazole was removed just before each experiment to avoid possible chemical interferences. No loss of activity was detected after the desalting procedure, so the specific activity of the final imidazole-free fraction remained that of the IMAC-purified enzyme.

Enzyme Activity

The gene kguK from C. necator (NCBI Reference Sequence: YP_841324.1) encodes a putative 2-ketogluconate kinase (EC 2.7.1.13), in line with earlier biochemical work that had detected such an activity in strain H16 (formerly termed Hydrogenomonas) [33]. 2-Ketogluconate kinase activity was experimentally confirmed in the recombinant KGUK samples from IMAC purification, which showed a specific activity of 8.7 U/mg. Maximal enzyme activity was observed at a 2-keto-D-gluconate (KG) concentration of 1.25 mM in the activity assay; when the KG concentration was increased further, a slow, continuous drop in activity was observed (Figure 2A). The effect of the ATP concentration on enzyme activity was also examined. Maximum specific activity was found at an ATP concentration of 1.25 mM; when the concentration was increased above 5 mM, activity decreased drastically, showing strong inhibition by substrate excess (Figure 3B). Kinase activity required Mg2+ as a cofactor. Because the real phosphate-donor substrate is the Mg-ATP complex, it is crucial to use Mg2+ at least at the same concentration as ATP to ensure full enzyme activity under the assay conditions; maximal activity was found at 5 mM Mg2+ with 1.25 mM ATP. The enzyme displayed a specific activity similar to that of the only previously described KGUK, from A. aerogenes (8.1 U/mg) [25].
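The drop in rate at high ATP is the classic signature of substrate inhibition. A minimal sketch of how such data can be modelled, assuming the common uncompetitive substrate-inhibition rate law v = Vmax·S/(Km + S + S²/Ki); the numeric data points below are hypothetical placeholders shaped like Figure 3B, not fitted values from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def substrate_inhibition(s, vmax, km, ki):
    """Uncompetitive substrate-inhibition rate law:
    v = Vmax * S / (Km + S + S**2 / Ki)."""
    return vmax * s / (km + s + s**2 / ki)

# Hypothetical (ATP concentration, rate) data illustrating the shape:
s = np.array([0.1, 0.3, 0.6, 1.25, 2.5, 5.0, 10.0, 20.0])   # mM ATP
v = np.array([2.1, 4.6, 6.5, 8.0, 7.4, 5.9, 3.8, 2.2])      # U/mg

popt, _ = curve_fit(substrate_inhibition, s, v, p0=(10.0, 0.5, 5.0))
vmax, km, ki = popt
print(f"Vmax={vmax:.2f} U/mg, Km={km:.2f} mM, Ki={ki:.2f} mM")
# The fitted curve peaks at S = sqrt(Km * Ki), mirroring the observed
# optimum near 1.25 mM ATP and the decline above 5 mM.
```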
Substrate Specificity

The substrate specificity of KGUKCnec for phosphate acceptors was studied on a broad variety of sugars with different chemical structures. Firstly, 11 commercially available aldo and keto sugars were tested: D-glucose, L-glucose, D-fructose, D-psicose, D-tagatose, D-ribulose, D-xylulose, D-sorbose, L-sorbose, D-erythrose, and 2-deoxy-D-ribose, varying both chain length and stereochemistry. These compounds were reacted with KGUK, and reaction progress was followed by the spectrophotometric assay described in the Materials and Methods section. None of the 11 selected sugars showed any conversion, revealing that they are not substrates for KGUKCnec under our experimental conditions. The study then focused on substrates with chemical structures closer to the natural substrate KG (i.e., KGul and KGal). In addition, the specificity of KGUK for 3-deoxy analogues was examined (Figure 4); indeed, KGUKCnec displays some amino acid sequence identity with previously described 2-keto-3-deoxy-D-gluconate kinases (KDGK), which belong to a different kinase family (EC 2.7.1.45) (Figure 5).

Figure 3 (caption fragment): assays at a constant Mg2+ excess of 5 mM (A); to evaluate the effect of higher ATP concentrations on the enzyme activity, additional assays were performed with the Mg2+ concentration increased to 25 mM (B). At ATP concentrations above 5 mM, a strong decrease in kinase activity was observed.

Thus, KDGK from Thermus thermophilus displays 35.9% amino acid sequence identity with the KGUK from C. necator. KDGK catalyzes the ATP-dependent phosphorylation of KDG (Figure 6), KDG being the C3-deoxy analogue of KG. Enzymes from this family have also been described as able to catalyze the phosphorylation of KG [35,36]. Nevertheless, there are no data in the literature on KDG as a substrate of KGUKs, so we decided to explore KGUKCnec activity using KDG and its C4 epimer (KDGul) as substrates. KGul, KGal, and KDGul were prepared as recently published [37] using pyruvate aldolases discovered from biodiversity, which were found to accept hydroxypyruvate and D-glyceraldehyde as nucleophile and electrophile substrates, respectively. In order to evaluate the catalytic properties of KGUKCnec towards the five compounds obtained (Figure 4), the kinetic parameters of the enzyme were calculated (Figure 2B-D). Kinetic parameters for the donor substrate ATP were evaluated as well, and the results are summarized in Table 2.
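As the footnote to Table 2 notes below, KG displayed sigmoid kinetics, and its parameters were obtained by nonlinear regression of the Hill equation (n = 1.4, K_M = K_0.5). A minimal sketch of such a fit, assuming hypothetical rate data; only the functional form and the fitting approach are taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(s, vmax, k_half, n):
    """Hill equation: v = Vmax * S**n / (K0.5**n + S**n)."""
    return vmax * s**n / (k_half**n + s**n)

# Hypothetical (KG concentration, rate) data with a sigmoid shape:
s = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 1.25, 1.5])  # mM KG
v = np.array([0.4, 1.0, 2.6, 5.0, 6.4, 7.2, 7.7, 8.0, 8.1])    # U/mg

popt, _ = curve_fit(hill, s, v, p0=(8.0, 0.4, 1.0))
vmax, k_half, n = popt
print(f"Vmax={vmax:.2f} U/mg, K0.5={k_half:.2f} mM, n={n:.2f}")
# kcat then follows from Vmax and the molar enzyme concentration
# (molecular weight 35.7 kDa), and kcat/K0.5 gives the catalytic
# efficiencies compared in Table 2.
```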
(Table 2, row for KGal*: no measurable kinetic parameters.) 1 The KG substrate showed sigmoid kinetics (see Figure 2A); kinetic parameters were calculated by nonlinear regression of the Hill equation, with Hill coefficient n = 1.4 and K_M = K_0.5. * KGal was not found to be a KGUK substrate under our assay conditions.

The kinetic parameters showed that KDG is a substrate of KGUK, though converted less efficiently than KG (kcat/KM = 7370 versus 16,770 s^-1·M^-1). This is the opposite of what has been observed for KDGK enzymes: kinase activity with both KDG and KG as substrates has been described for the KDGKs from Sulfolobus tokodaii and T. thermophilus and, in both cases, KDG was the better substrate, while KG phosphorylation was less efficient [35,36]. KGul and KDGul were identified as new substrates of KGUKCnec, although they were converted with lower catalytic efficiencies than KG and KDG; KGal, by contrast, gave no reaction. KG was clearly the best substrate (kcat/KM = 16,770 s^-1·M^-1), whereas its C3 and C4 epimer (KGul) reacted about 60-fold more slowly (kcat/KM = 246 s^-1·M^-1), revealing the importance of the (3S,4R) stereochemistry within the active site. When the C3 hydroxy group was missing (KDG), the enzyme maintained good efficiency, with a decrease of only about half an order of magnitude (kcat/KM = 7370 s^-1·M^-1).
Nevertheless, when the hydroxy moiety at C3 is absent and the C4 configuration is also inverted (KDGul, the C4 epimer of KDG), a drastic decrease in efficiency is observed (kcat/KM = 83 s^-1·M^-1). The configuration at C4 therefore appears to be very important for KGUK activity.

Synthesis of 2-ketogluconate-6-phosphate

KGUKCnec was used as a biocatalyst to prepare KGP at a preparative scale, as a key product for metabolic studies. A biocatalytic system based on the phosphorylation of KG by this new enzyme was therefore developed. The reaction was first assayed on a small scale in order to find the optimal conditions. An ATP regeneration system based on phosphoenolpyruvate (PEP) and pyruvate kinase (PK) was implemented (Figure 7B), both to avoid the difficulty of separating ADP from KGP and to circumvent inhibition by ATP at [ATP] > 5 mM (Figure 3B). Indeed, the PEP/PK regeneration system has been shown to be compatible with a one-step purification of phosphorylated sugars via their precipitation as Ba2+ salts [7]. Reaction progress was monitored by measuring the pyruvate formed during ATP regeneration.

The reaction was optimized by varying the concentrations of KG and PEP in Tris-HCl buffer (1.0 mL, 50 mM, pH 8.0) containing catalytic amounts of ATP (2.5 mM) and MgSO4 (4 mM), in the presence of KGUKCnec (0.35 U) and PK (1.7 U). KG and PEP were used at a maximum concentration of 50 mM. Four different KG/PEP substrate ratios were assayed to determine the optimal concentrations, with PEP used either as the limiting substrate (KG/PEP: 1.0/0.5, 1.0/0.7, and 1.0/0.9) or in excess (KG/PEP: 1.0/1.1).
In all cases, final KGP accumulation was lower than 70%. The best yield (Figure 8A) was obtained with a KG/PEP ratio of 1.0/0.5 (70% of KGP accumulated after 3 h of reaction). Optimization was continued by increasing the final volume (2 mL) and decreasing the concentration of the limiting substrate (20 mM); under these new conditions, the best results (Figure 8B) were obtained with a KG/PEP ratio of 1.0/0.7 (80% KGP accumulation after 6 h). Owing to the positive effect observed upon dilution, a third reaction was finally run in a final volume of 2.5 mL with 14 mM KG and a KG/PEP ratio of 1.0/0.8. The reaction was performed at room temperature and, under these latter conditions, accumulation of the phosphorylated compound in the reaction medium reached 100% (Figure 8C) after overnight gentle stirring (100-200 rpm, approx. 12 h). KGP could thus be obtained in 85% yield of pure barium salt, corresponding to a 0.6 g scale. A single precipitation of KGP directly from the reaction mixture as its barium salt gave pure KGP, as shown by the 1H and 13C NMR spectra available in the Materials and Methods section. Importantly, 2-ketogluconate-6-phosphate was thereby fully characterized for the first time: although this biocatalytic approach had previously been used for KGP synthesis [23,25], the product had then only been identified by TLC.

Methods

1H (400 MHz) and 13C (100 MHz) nuclear magnetic resonance (NMR) analyses were carried out on a Bruker Avance 400 MHz spectrometer. Mass spectra were recorded on a Q-Exactive spectrometer from Thermo Scientific using electrospray ionization (ESI).

Cloning

To amplify the kguK gene, chromosomal DNA from C. necator (R. eutropha) H16 (strain donated by Dr. Dieter Jendrossek, IMB, Univ. Stuttgart) was used. Primers for PCR with PwoI DNA polymerase were kguK-NdeI (5'-TTTTCATATGAGCACCGATCTTGACGTGG-3', engineered NdeI site underlined) and kguK-BamHI (5'-TTTTGGATCCTCACAAACTGGCGGCCGC-3', engineered BamHI site underlined). The amplified DNA was cut with NdeI and BamHI and ligated into a likewise-cut pBluescriptSK vector (Agilent Technologies). The ligation mixture was then used to transform E. coli DH5α cells on LB-ampicillin plates (Amp 100 mg/L) with X-Gal (blue-white selection). White colonies were analyzed for correct insertion. The kguK-containing NdeI-BamHI fragment was then cloned into a likewise-cut pET28a(+) vector (Invitrogen) with selection for kanamycin resistance. The presence of the cloned gene was verified by custom DNA sequencing (GATC Biotech, Konstanz, Germany). The vector pET28a-kguKCnec thus encoded an N-terminal His6 tag fused to the protein, in order to simplify its purification by IMAC.

Expression and Purification

Gene expression was carried out in E. coli BL21(DE3) pLysS with induction by IPTG, and expression of KGUKCnec in BL21(DE3) pLysS cells was evaluated by SDS-PAGE.
Colonies containing the plasmid pET28a-kguKCnec were cultured in Luria-Bertani (LB) broth with kanamycin (30 mg/L) as the selection antibiotic at 37 °C under orbital shaking (200 rpm). When the culture reached an OD600 of 0.5, protein expression was induced by adding IPTG (0.5 mM final concentration) and the temperature was lowered to 30 °C; the culture was incubated for a further 12 h. Cells were harvested by centrifugation, washed twice, and resuspended in buffer A (50 mM NaH2PO4, 300 mM NaCl, pH 8.0). The cell suspension was disrupted by ultrasonication and the lysate was centrifuged at 10,000× g for 20 min. The clear supernatant (20 mL) was loaded onto a Ni2+-NTA-agarose column (Qiagen; h = 1.5 cm, Ø = 2.5 cm) pre-equilibrated with buffer B (buffer A plus 20 mM imidazole). The column was washed with buffer B, and the retained proteins were eluted with the same buffer containing 250 mM imidazole. Eluted fractions containing pure protein were pooled and stored directly at 4 °C. In order to avoid possible interferences, imidazole was removed before each enzymatic experiment using a desalting column (PD-10 Sephadex G-25M, Pharmacia) pre-equilibrated with 50 mM NaH2PO4, pH 8.0 (final buffer): 2 mL of IMAC-purified enzyme were loaded onto the G-25M column equilibrated with the final buffer, the elution fractions containing the enzyme (2 or 3 mL) were pooled, and the specific activity was assayed. No loss of activity was detected after the desalting procedure.

Enzyme Activity Assays and Kinetic Studies

Phosphorylation reactions of the different substrates catalyzed by KGUK were evaluated spectrophotometrically by measuring the release of ADP using a coupled assay with PK and LDH (Figure 7A) [39]. A typical assay was performed in a 1 mL reaction mixture containing Tris-HCl buffer (…). Similar activity assays were used to evaluate the specificity of KGUKCnec for different phosphate acceptors, with KG replaced by the corresponding substrate under analysis. One unit of kinase activity was defined as the amount of enzyme producing 1 µmole of 2-ketogluconate-6-phosphate (KGP) per minute under the above conditions, using KG as the substrate. Kinetic parameters were determined by following the kinase activity at different substrate concentrations under the general conditions described above; steady-state kinetic assays were measured at 25 °C in a total volume of 1 mL. Kinetic parameters for ATP were measured with 2.1 µg/mL of purified KGUKCnec and a constant Mg2+ excess of 5 mM at each assay point, with KG as the substrate (1.25 mM) and 14 different ATP concentrations as the phosphate donor (Figure 3A). To evaluate the effect of higher ATP concentrations on enzyme activity, additional assays were carried out at a Mg2+ concentration of 25 mM to ensure correct formation of the Mg-ATP complex (Figure 3B). To avoid inhibition by ATP excess, the maximum ATP concentration used in the assays for the different phosphate acceptors was 1.25 mM at each kinetic point. Kinetic parameters for KG were measured with 2.5 µg/mL of purified KGUKCnec at 15 different KG concentrations (Figure 2A).
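In the PK/LDH coupled assay, each ADP released regenerates ATP from PEP with formation of pyruvate, and LDH then reduces the pyruvate while oxidizing one NADH, so the kinase rate can be read from the decrease in absorbance at 340 nm. A minimal sketch of that conversion, assuming the standard NADH molar extinction coefficient at 340 nm (about 6.22 mM^-1·cm^-1) and a 1 cm path length; the slope value is a hypothetical example:

```python
# Convert an observed A340 slope into kinase activity (U = umol/min),
# using the 1:1:1 stoichiometry of ADP : pyruvate : NADH oxidized.
EPSILON_NADH = 6.22   # mM^-1 cm^-1 at 340 nm (standard literature value)
PATH_CM = 1.0         # cuvette path length, cm
VOLUME_ML = 1.0       # assay volume, mL

def kinase_units(dA340_per_min: float) -> float:
    """Return enzyme units in the cuvette from the A340 decrease rate."""
    # mM NADH oxidized per minute:
    rate_mM_min = dA340_per_min / (EPSILON_NADH * PATH_CM)
    # umol per minute in the assay volume (mM * mL = umol):
    return rate_mM_min * VOLUME_ML

units = kinase_units(0.12)   # hypothetical slope: 0.12 A340 units/min
protein_mg = 0.0025          # e.g. 2.5 ug/mL enzyme in a 1 mL assay
print(f"{units:.3f} U -> {units / protein_mg:.1f} U/mg")
```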
Assays to determine the kinetic parameters for KDG [38] were performed with 1.5 µg/mL of purified KGUKCnec at 10 substrate concentrations (Figure 2B). Assays for KGul and KDGul [37] were performed with 3.4 µg/mL of purified KGUKCnec at 9 and 11 concentrations of the respective substrates (Figure 2C,D). Kinetic constants were calculated using the built-in nonlinear regression tools of SigmaPlot 12.0 (Systat Software Inc.).

Phosphorylation of 2-Keto-D-gluconate: Reaction Progress Monitoring

Phosphorylation reactions were followed by measuring the accumulation of the pyruvate formed during the ATP regeneration process (pyruvate kinase/lactate dehydrogenase), using the spectrophotometric assay described above (Enzyme Activity, Figure 7A). Assays were performed in 1 mL reaction mixtures containing the reaction aliquot, Tris-HCl (40 mM, pH 8.0), NADH (0.2 µmole), and LDH (2 U). One mmol of oxidized NADH was equivalent to 1 mmol of pyruvate, which was equivalent to 1 mmol of KGP formed. The reaction mixture was quenched by dropping the pH to 3 with HCl (5 M), resulting in partial precipitation of the enzymes. The pH was then adjusted to 6 with NaOH (5 M), and 2 equivalents of BaCl2 dihydrate were added. The solution was centrifuged at 10,000 rpm at 4 °C for 10 min and the pellets were discarded. After partial concentration in vacuo, 5 volumes of ethanol were added. The solution was incubated overnight at 4 °C and then centrifuged. After one washing with ethanol followed by two washings with acetone, KGP barium salt (molecular weight 476.5 g/mol) was recovered as a white powder in 85% yield (1.275 mmol, 0.608 g).

Analysis

The sample exists as two cyclic forms, the α- and β-pyranoses, the latter being the major one; owing to an overlap of the signals, it was difficult to quantify each form precisely.

Conclusions

We successfully cloned, overexpressed, purified, and characterized KGUK from C. necator H16, an enzyme first reported in 1974 but never studied thereafter. The enzyme proved unstable in its pure form; we succeeded in stabilizing it by storage in an imidazole solution, which was removed just before use. For the first time, we demonstrated that KDG is a substrate for this enzyme and that some KG and KDG epimers can also be converted into the corresponding phosphorylated derivatives. The 2-ketoacid moiety is necessary, since none of the ketoses or aldoses assayed was accepted as a substrate. Finally, KGP was successfully prepared at a preparative scale, in good yield, and was fully characterized.
EFFECTIVE BASE-METAL HEDGING: THE OPTIMAL HEDGE RATIO AND HEDGING HORIZON

This study investigates optimal hedge ratios in all base metal markets. Using recent hedging computation techniques, we find 1) that the short-run optimal hedge ratio is increasing in the hedging horizon, 2) that the long-horizon limit of the optimal hedge ratio does not converge to one but is slightly higher for most of these markets, and 3) that hedging effectiveness is also increasing in the hedging horizon. When hedging with futures in these markets, one should hedge long term, at about 6 to 8 weeks, with a hedge ratio slightly greater than one. These results are of interest to many purchasing departments and other commodity hedgers.

INTRODUCTION

Hedging is considered an integral part of a competitive and successful commodity purchasing department. With raw material demand rising globally, the strategic importance of hedging has never been as critical as it is today. Volatility in commodity markets continues to increase because of 1) political uncertainty and natural disasters, 2) the expanding global nature of trade and the resulting soaring demand from remote markets, and 3) a corresponding shift in manufacturing capacity as more products flow into the U.S. from abroad (Dickson et al. (2006)). Due to the increased volatility in commodity markets and strengthened global competition, companies can no longer rely on traditional approaches, such as strategic sourcing and volume aggregation, to manage their purchasing needs. Multinational firms no longer compete "…by exploiting scale and scope economies or by taking advantage of imperfections in the world's goods, labor, and capital markets" (Hansen and Nohria (2004)). Firms must rely more than before on risk management techniques to manage their materials exposure. These techniques include, but are not limited to, eliminating cost inefficiencies in operations, hedging commodity price risk with financial derivatives, and altering hedging horizons.

Our study concentrates on optimal hedge ratios and horizons in the metals markets. Our results show that 1) the short-run optimal hedge ratio is increasing in the hedging horizon, 2) the long-horizon limit of the optimal hedge ratio does not converge to one but is slightly higher for most of these markets, and 3) hedging effectiveness is also increasing in the hedging horizon. The best hedging decision for these markets is to hedge long term, at about 6 to 8 weeks, with a hedge ratio slightly greater than one. These findings provide insights into, and a better understanding of, the characteristics and properties that shape the effectiveness of futures commodity trading: insights that are valuable and relevant to the general commodity hedger.
In 2003, a survey taken as part of the Corporate Executive Board Procurement Strategy Council (2003) revealed that 41% of risk managers believe their procurement department will become significantly more important in the coming years and, critically, over 50% acknowledge that the effectiveness of their procurement organization's risk management division needs significant improvement. In fact, these managers ranked commodity price risk as more relevant than currency price risk by a 3-to-2 ratio. Consequently, it is no surprise that hedging demand in the metals markets is such that, from Jan-June 2005 to Jan-June 2006, non-precious metals futures trading increased by 21% in volume, with the volume of aluminum contracts alone increasing by 32% (Holz (2006)). Wall Street is responding to the demand by hiring more traders and new product developers; Barclays aims to hire 20% more staff in 2007, after already increasing staff by 35% the previous year (Freed (2007)). Market demand projections see no end to this trend. In the aluminum market, demand is projected to grow by 9.4% in 2007, following 8% growth in 2006. This compares unfavorably with supply projections: the International Primary Aluminum Institute forecasts production increases of 6.5% in 2007 and 3.4% in 2008. While metals producers can expect profitable years, metal consumers face difficult choices and reduced profitability. Market conditions point to the need for a concerted risk management policy at the corporate level.

The hedging literature is vast and covers both the motives for hedging and the strategies used to address them. For the current study, two areas of the literature are important. The first aims to justify the use of hedging by procurement divisions (Froot et al. (1993), Hansen and Nohria (2004), Koppenhaver and Swidler (1996)), while the second helps determine how best to select optimal futures positions that minimize the risk inherent in the spot (cash) market (chronologically, Fletcher and Ward (1971), Benninga et al. (1984), Perron (1989), Baillie and Meyers (1990), Chowdhury (1991), Lien and Luo (1993), Geppert (1995), Alexander (1999), Chen, Lee and Shrestha (2004)). This study is an investigation into the optimal hedge ratio and hedging effectiveness for base metals.

Hedging in futures markets involves taking a futures position opposite to a spot market position (Institute for Financial Markets (1998)). For commodity purchasing departments, the futures markets effectively represent a pricing mechanism in the commodity purchasing process. One common definition of the optimal hedge ratio is "…the ratio of the covariance between spot and futures prices to the variance of the futures price" (Myers and Thompson (1989)). Intuitively, the optimal hedge ratio defines the futures market position that minimizes the risk absorbed in the spot market or, plainly, how much of the commodity should be hedged with futures. We also look specifically at the hedging horizon, as previously studied by Chen, Lee, and Shrestha (CLS) (2004) using cointegration to estimate the optimal hedge ratio, to determine whether hedging effectiveness improves over longer hedging horizons.
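Written out, the Myers-Thompson definition just quoted is the familiar minimum-variance hedge ratio. With S the spot price, F the futures price, and Δ denoting the price change over the hedging horizon:

```latex
h^{*} \;=\; \frac{\operatorname{Cov}(\Delta S,\,\Delta F)}{\operatorname{Var}(\Delta F)},
```

which is also the slope coefficient β in a regression of spot-price changes on futures-price changes.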
This study analyzes the six base metals traded on the London Metal Exchange (LME): aluminum, copper, lead, nickel, tin, and zinc. The use of LME base metals is advantageous given the exchange's global acceptance as the world leader in metal futures trading. It is also interesting to study these futures and their hedging effectiveness given their dramatic upswing in volatility over the past few years: the volatilities of the six base metals increased by 174% on average.

The paper first presents a review of the academic literature; Section III then presents the empirical questions. In Section IV, we present the data and the methodology. Section V reports the results, and we conclude in Section VI.

II. LITERATURE REVIEW

Our study builds on the last 25 years of the optimal-hedge-ratio literature. Our empirical models for estimation are based on the body of research that started with Ederington in 1979. This research area evolved through three phases: Ederington (1979) established the first empirical models; later, more sophisticated GARCH estimation techniques were applied; and most recently, cointegration approaches have been used.

Ederington (1979) was the first to empirically estimate optimal hedge ratios and is accordingly credited with formulating the theoretical framework. Ederington summarizes the three working theories of hedging at the time: 1) Traditional Theory, 2) the theories of Holbrook Working, and 3) Portfolio Theory. He finds fault with Traditional Theory, the leading theory at the time, challenging its convenient yet unrealistic assumption that a change in the futures price is exactly proportionate to a change in the cash price. Ederington argues that the theories of Holbrook Working improve on this inherent weakness by recognizing that most hedgers do account for the dynamic information provided by the cash-futures basis at the time the hedge is placed. Still, the study argues that a more realistic approach is to view hedging in a risk-and-return framework best formulated by combining Portfolio Theory with Working's theory; this provides a rationale for why a hedger may at different times be either hedged or completely unhedged.

Ederington's seminal contribution to the optimal-hedge-ratio literature is the empirical finding that even pure risk minimizers will hedge less than their spot market requirements, contrary to the findings of preceding research. Moreover, he finds that hedging effectiveness improves across two time horizons for financial security futures: specifically, the futures markets for two financial securities prove to be more effective hedging instruments over longer periods. However, the limitation of using only two time horizons, along with the arbitrary definition of a long period as four weeks and a short period as one week, jeopardizes the applicability of Ederington's conclusions. Furthermore, the study assumes that the minimum-variance hedge ratio is simultaneously the optimal hedge ratio, without formally proving or interpreting this relationship. A second, related weakness lies in the assumption that a hedger who maximizes profit will simultaneously minimize the variance of the hedge.

In consideration of these limitations, several important studies quickly addressed these concerns. Benninga, Eldor, and Zilcha (1984) responded first, finding fault with the latter of the two weaknesses. Benninga et al.
(1984) find that assuming a hedger has a quadratic utility function presents "undesirable properties" for estimation, and also point out that the assumption that minimizing producer income variance is equivalent to finding the optimal hedge ratio is, in general, theoretically inappropriate. Instead, Benninga et al. prove that, in unbiased futures markets, the minimization of income variance is equivalent to the optimal hedge ratio.

Benninga et al. make two assumptions: 1) the futures price is an unbiased predictor of the future spot price, F0 = E0(F1) = E0(P2), and 2) spot prices are regressible on futures prices, P1 = α + β·F1 + ε, where ε is homoscedastic. Here F0 is the futures price at t=0, F1 the futures price at t=1, and P2 the spot price at t=2; both F1 and P2 are therefore unknown prices that the producer faces in everyday hedging decisions. In unbiased markets, the only reason for the producer to hedge is to minimize risk, given that on average there is little to gain in an unbiased market. The optimal hedge is therefore X = βQ, with Q the quantity required in the spot market and X the optimal amount hedged in the futures market.

Assumption 2 may be econometrically troublesome, since the use of price levels can lead to autocorrelation in the residuals. Using price changes instead, (P1 - P0) = α + β·(F1 - F0) + ε, rids the model of this autocorrelation. This model still yields the optimal hedge ratio under the assumption of unbiased futures markets. The only uncertainty remaining in the producer's expected income is the residual, and the regression coefficient β is the minimum-variance hedge ratio. The strength of their result "…derives from its generality (it is free from assumptions about utility functions) and from the ease of its applicability (it requires only a regression analysis to derive the optimal hedge ratio)" (Benninga et al. (1984)).
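The practical content of that result is that the minimum-variance hedge ratio can be estimated with a single OLS regression of spot-price changes on futures-price changes. A minimal sketch, assuming `spot` and `futures` are aligned price arrays (e.g. weekly LME closes); the variable names and data source are placeholders:

```python
import numpy as np
import statsmodels.api as sm

def hedge_ratio(spot: np.ndarray, futures: np.ndarray) -> float:
    """Estimate beta in (P_t - P_{t-1}) = a + b*(F_t - F_{t-1}) + e."""
    dP = np.diff(spot)
    dF = np.diff(futures)
    ols = sm.OLS(dP, sm.add_constant(dF)).fit()
    return ols.params[1]   # slope = minimum-variance hedge ratio

# Equivalently, beta = Cov(dP, dF) / Var(dF):
# beta = np.cov(dP, dF, ddof=1)[0, 1] / np.var(dF, ddof=1)
```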
Following the research by Benninga et al. (1984), the empirical estimation of the optimal hedge ratio was improved by accounting for cointegration between spot and futures prices. A key finding is that spot and futures prices tend to drift together over time. Chowdhury (1991) shows that "…the market efficiency hypothesis requires that the current futures price and the future spot price of a commodity are close together." This follows from the definition of market efficiency, which implies that current market prices should reflect all current and past price information. Chowdhury uses LME price data to test the market efficiency (cointegration) hypothesis for copper, lead, tin, and zinc, and finds cointegration among the four base metals studied, suggesting that conventional estimation techniques for the optimal hedge ratio would lead to over-hedging: a model that fails to incorporate the long-run co-movement between the variables does not capture their mean-reverting tendency, which biases the point estimates upward.

Lien and Luo (1993) address the over-hedging problem by estimating the optimal hedge ratio with an error-correction model, accounting for the cointegration issue the Chowdhury study raises. Running their estimation at 9 hedging horizons, they find that the optimal hedge ratio tends to fluctuate before converging towards one, suggesting that the optimal hedge ratio converges to the naive hedge ratio over time. These findings were later augmented by Geppert (1995), who establishes that hedging effectiveness and the optimal hedge ratio both depend on the permanent and transitory components of the price changes between spot and futures prices: "Over long horizons, the shared component ties the spot and futures series together and the two prices will be perfectly correlated" (Geppert (1995)). A major weakness of the Geppert study is the model requirement that both spot and futures prices be I(1) in order to implement the Stock and Watson (1988) methodology suggested in the study. It would be useful to adopt a methodology that provides valid hedge ratios when the unit-root condition is not satisfied.

Such a study is Chen, Lee and Shrestha (CLS) (2004). CLS empirically estimate the optimal hedge ratio with a cointegration methodology that does not require both the spot and futures prices to contain a single unit root. They estimate both short-run and long-run hedge ratios with the Pesaran et al. (2001) approach, which does not require both series to be I(1) or I(2) together; the approach works both when prices are unit-root processes and when they are stationary. In all, 9 different hedging horizons are considered over 25 different commodities. As expected, CLS find that futures and spot prices share a stochastic trend, as implied theoretically by market efficiency and the no-arbitrage condition. In estimating the optimal hedge ratios, they find that hedging effectiveness does improve over greater hedging horizons and that the short-run hedge ratio is significantly less than one. Our study of the six LME metals follows the CLS methodology.
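A quick way to check the spot-futures cointegration this literature leans on is the Engle-Granger two-step test. A minimal sketch, assuming level price series in `spot` and `futures`; this illustrates the generic test, not the Pesaran et al. bounds procedure used by CLS:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

def check_cointegration(spot: np.ndarray, futures: np.ndarray) -> None:
    # Step 0: both series should be I(1): levels non-stationary,
    # first differences stationary.
    for name, series in (("spot", spot), ("futures", futures)):
        p_level = adfuller(series)[1]
        p_diff = adfuller(np.diff(series))[1]
        print(f"{name}: ADF p-value level={p_level:.3f}, diff={p_diff:.3f}")
    # Steps 1-2: Engle-Granger test on the residuals of spot ~ futures.
    t_stat, p_value, _ = coint(spot, futures)
    print(f"Engle-Granger: t={t_stat:.2f}, p={p_value:.3f} "
          "(small p => cointegrated, as market efficiency implies)")
```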
III. EMPIRICAL QUESTIONS

In principle, futures markets exist to offer buyers and sellers of the underlying commodities, financial instruments, or indices the opportunity to minimize the price risk inherent in cash market positions. These open markets allow for better price discovery. Moreover, futures markets are appealing to firms because of their high liquidity and ease of entry and exit. Businesses across the globe use these advantageous properties to manage price-risk exposure, which translates into cost savings as they mitigate that exposure. Firms especially adept at risk management are more likely to survive periods of high price risk and volatility. Given the recently competitive nature of the commodity landscape, firms are implementing, and plan to implement, a multitude of hedging strategies to trim the costs of elevated commodity prices.

In commodity purchasing, hedging with futures contracts can be thought of as offsetting the risk imposed by a firm's commodity requirements. A firm that requires a fixed amount of copper in the production of its goods would want to offset its market price risk by buying copper futures against its annual requirements. Under a futures contract, the price is set for delivery at a future date. Therefore, if a trader anticipates a bullish copper market, she would be wise to assume a long position, defined as buying deferred-month futures contracts; this allows the trader to realize the gain in futures prices, alleviating the upside price risk in the spot market.

Hedging price risk involves not only deciding when to be short and when to be long; it also requires a thorough understanding of the long-run relationship between the spot and futures markets. This may be the most important element of an efficient commodity purchasing department, because it ultimately reveals how effectively the department uses the price discovery relationship in formulating hedging strategies. The price discovery relationship implies that spot and futures prices share a long-run stochastic trend; an effective hedging department would understand that, over longer hedging horizons, the two prices tend to revert to the mean together. For these reasons, the hedging horizon is the key issue addressed in this research. Given the volatile and upward-trending data employed in this study, it seems appropriate to hypothesize that a firm with a comparatively longer hedging horizon would have been much more effective at minimizing risk over our data period. The current research consensus is that spot and futures markets move together over long horizons, implying that a firm facing adverse upside price risk would be wise to lengthen its hedging horizon to offset the unfavorable prospect of rising spot prices.

Let us look at a trading scenario in the aluminum futures market to emphasize the importance of effective risk management (see the table "Actual Aluminum Market Time Spread"). Consumer-product firms are natural hedgers here, because they are always in demand of (buying) aluminum to package their respective products. Aluminum has recently experienced a 41% increase in its mean historical futures price. Spot market prices followed this trend, but in an often erratic and unpredictable fashion, which naturally introduced a considerable amount of basis risk and made the hedging decisions of the commodity traders within these companies difficult at best. Basis risk refers to unexpected fluctuations in the relationship between cash and futures prices, driven by influences ranging from seasonality to supply disruptions. All of these firms would likely have endured this period unsuccessfully without some form of hedging strategy.
Consider a beverage company, similar to one of the firms mentioned above, with a realistic annual aluminum requirement of 100,000 metric tons (MT). The standard aluminum contract is specified for 25 MT for delivery at some future date. Now, consider the cash aluminum price of $1,322 in October 1998 and compare it with the prices prevailing in May 2000. The market in October was in contango, as indicated by the futures price being greater than the cash price. Therefore, pursuing the recommended strategy above would lead to hedging the spot market position of 100,000 MT.

IV. ECONOMETRIC METHODOLOGY

Table 1 shows the six metals markets our data set covers; all are traded on the London Metal Exchange: aluminum, copper, lead, nickel, tin, and zinc. Our dataset is longer than those used in previous studies and provides daily close prices for both cash and futures from July 1998 to October 2006. The futures data are collected from Futuresource, a database specifically designed for commodity traders. The futures price series represents the near-by futures contract, i.e. the contract with the closest settlement date, rolled over 10 days prior to expiration. The cash prices used are very closely related to the second-bell close on the LME, since nearly all metals pricing is based on this quote.

[INSERT TABLE 1 HERE]

Table 2 illustrates the recent increase in volatility in the metals markets: the price standard deviation increased across the six metals by an average of 174%. The table also reports the ratio of the standard deviation to the mean price, indicating how volatility increased in proportion to the average price for all six base metals; this statistic indicates that both mean prices and standard deviations increased over the period. Figure 1 illustrates the increased volatility prevailing in the current commodity landscape. Lead set a record high on October 16, with its mean futures price increasing by over 77%. Nickel's price path parallels that of copper, with its price more than doubling (124%); again, the record high was established on October 16. Tin increased modestly in comparison with lead and nickel, with a much less dramatic rise of 51%, and a record high likewise set on October 16.

[INSERT FIGURE 1 HERE] [INSERT TABLE 2 HERE]

Empirically, the estimation follows the derivation provided by Benninga et al. (1984). First, assume that the commodity purchasing department of a beverage company has to buy some quantity Q of aluminum at t=1. The price P1 at t=1 is uncertain, since future prices cannot be predicted. The commodity trader can buy futures at the price F0 at t=0 to offset the uncertainty of the price P1 at t=1. The income of the firm after implementing the hedge is therefore represented in equation (1) below, where F1 is the futures price at t=1 and X is the trader's hedge:

I = Q·P1 + X·(F0 - F1)    (1)

In this case, the quantity X represents a long position in the futures market, and the difference between the two futures prices establishes whether the hedge was favorable.
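Before continuing the derivation, a quick numeric sketch of the arithmetic of such a long hedge. The requirement (100,000 MT) and the contract size (25 MT) come from the text; the three prices are hypothetical placeholders, since the May 2000 quotes are not reproduced here:

```python
# Net cost of an aluminum purchase with a full long futures hedge.
Q_MT = 100_000          # annual requirement, metric tons (from the text)
CONTRACT_MT = 25        # LME aluminum contract size (from the text)

F0 = 1_400.0            # hypothetical futures price at hedge initiation, $/MT
F1 = 1_550.0            # hypothetical futures price when the metal is bought
P1 = 1_540.0            # hypothetical cash price paid in the spot market

h = 1.0                                     # naive hedge ratio
contracts = round(h * Q_MT / CONTRACT_MT)   # 4,000 contracts
futures_gain = contracts * CONTRACT_MT * (F1 - F0)
net_cost = Q_MT * P1 - futures_gain
print(f"{contracts} contracts, futures gain ${futures_gain:,.0f}, "
      f"net cost ${net_cost:,.0f} (unhedged: ${Q_MT * P1:,.0f})")
```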
In order to derive the optimal hedge ratio, one must assume that the futures market is an unbiased predictor (market efficiency) of the spot market, as denoted below in equation (2). This assumption is not unrealistic given the wide body of research on cointegration indicating that futures and spot prices share a mean-reverting relationship in the long run (Lien and Luo (1993), Geppert (1995), Alexander (1999), CLS (2004)). It is also assumed that the spot price shares a linear relationship with the futures market, i.e., that spot prices can be regressed on futures prices, as in equation (3). This holds if ε, the error term, is not correlated with F_1 (Benninga et al. (1984)).

F_0 = E_0(F_1) = E_0(P_1)    (2)

P_1 = α + β·F_1 + ε    (3)

Because the price levels contain unit roots, estimating equation (3) directly risks a spurious regression. Subsequently, the variables are differenced to rid the model of this inherent problem, as illustrated below in equation (4). All the assumptions still hold if equation (4) is estimated in favor of equation (3).

(P_1 − P_0) = α + β·(F_1 − F_0) + ε    (4)

Equation (5) replicates equation (1), but in this case the dependent variable I is written out explicitly to capture the income of the firm after the hedge is completed:

I = Q·P_1 + X·(F_0 − F_1)    (5)

The expected income of the firm is found to equal the cost of the spot market requirement under the unbiasedness assumption in equation (2). This relationship is denoted below in equation (6), where the two futures prices cancel out under the assumption of unbiasedness. The only remaining reason to hedge is to minimize the risk that the commodity poses.

E_0(I) = Q·E_0(P_1) + X·(F_0 − E_0(F_1)) = Q·E_0(P_1)    (6)

If the commodity trader sets his hedge position equal to the product of the regression coefficient β and the physical requirement of the commodity Q, then equation (7) below follows. This is the result of substituting (β·Q) for X in equation (5):

I = Q·P_1 + β·Q·(F_0 − F_1)    (7)

Solving equation (3) for (P_1 − β·F_1) allows the substitution of (α + ε) into equation (8) below:

I = Q·(P_1 − β·F_1) + β·Q·F_0 = Q·(α + ε) + β·Q·F_0    (8)

Equation (8) shows that with the hedge position X = Q·β, the only remaining uncertainty in income is the error term, which, by definition, cannot be hedged. All hedgeable income variance is therefore eliminated, so the position Q·β minimizes income variance. This finding proves that the minimum variance hedge ratio is also the optimal hedge ratio.

Equation (9) gives the income variance that the trader/producer minimizes to obtain the minimum variance hedge ratio defined by Ederington (1979):

Var(I) = Q²·Var(P_1) + X²·Var(F_1) − 2·Q·X·Cov(P_1, F_1)    (9)

The minimum variance hedge can also be represented as equation (10) below, obtained by simple differentiation of equation (9) with respect to X:

X/Q = Cov(P_1, F_1) / Var(F_1)    (10)

Note that X/Q is equivalent to β, the coefficient representing the hedge ratio in equation (4), which in turn equals the expression Cov(P_1, F_1) / Var(F_1).
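Since equations (4) and (10) identify the optimal hedge ratio with the OLS slope of spot price changes on futures price changes, the estimator is short to write down. The sketch below is illustrative only; the synthetic series stand in for the paper's LME data, which are not reproduced here.

```python
import numpy as np

def mv_hedge_ratio(spot, futures):
    """Minimum-variance hedge ratio from equation (4):
    beta = Cov(dP, dF) / Var(dF), the OLS slope of spot changes on
    futures changes. The hedge position is then X = beta * Q."""
    dP = np.diff(np.asarray(spot, dtype=float))
    dF = np.diff(np.asarray(futures, dtype=float))
    return np.cov(dP, dF, ddof=1)[0, 1] / np.var(dF, ddof=1)

# Illustration on synthetic cointegrated series (not the paper's data):
rng = np.random.default_rng(0)
f = 1_500 + np.cumsum(rng.normal(0, 10, 500))   # futures: random walk
p = f + rng.normal(0, 8, 500)                   # spot tracks futures + noise
print(round(mv_hedge_ratio(p, f), 3))           # close to 1 by construction
```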
Given this proof, it is theoretically valid to estimate the optimal hedge ratio empirically with the differenced-form equation (4) above. Before estimating this model, it must be addressed how the optimal hedge ratio will be estimated for the different hedging horizons. These estimation techniques are produced in the studies by Geppert (1995) and CLS (2004). Both studies show that the price changes (∆P_t and ∆F_t) in equation (4) should be k-period differenced to properly estimate the optimal hedge ratio for a k-period hedging horizon. Simply put, this means that the frequency of the data must match the hedging horizon of the estimated optimal hedge ratio. A major drawback of the Geppert study is the use of overlapping differencing to prevent the sample size from becoming too small. As CLS point out, such a method produces correlated observations, which leads to a regression with autocorrelated error terms. This should be avoided to eliminate the upward bias in estimates of the statistical significance of the coefficient estimates. The sample size in the present study is large enough to warrant the use of non-overlapping differences, which avoids the troublesome autocorrelated error terms produced by overlapping differencing.

The next step in the methodology is to test for unit roots in both the spot and futures prices of all six base metals. This is necessary because, as market efficiency implies, futures and spot prices should move together over time. Under market efficiency, if futures prices move in one direction then so do spot prices, implying that if both series are I(1) then they should also be cointegrated. Perron (1989) unit root tests are performed to account for the breaks in the data that are quite obvious when visually examining Figures 1-6. This method tests for stationarity after detrending the series while allowing for structural breaks. The structural breaks in this test should be exogenous. This is easily supported in the base metals, as speculative hedge funds have increasingly entered commodity markets to create more balanced portfolios. This phenomenon has coincided with the price increases outlined in Figures 1-6 and would be difficult to conceive as anything but exogenous in the causality of futures prices. The series are detrended using both slope and intercept shifts, as subsequent updates to the study have shown this method to be preferred (Pesaran (1997)). The break points for these tests are chosen by visually examining the data to determine the break used in estimating the test statistic.

The estimated optimal hedge ratio supplies the hedge position X in equation (13), the hedging-effectiveness measure. Furthermore, the second half of the data set is used in calculating the remainder of the coefficients, with Q set to 1. Ultimately, this equation represents the amount of variance reduced by implementing the hedge, above and beyond that of an unhedged position.

V. RESULTS

The first part of the methodology involves testing for unit roots, i.e., the stationarity of the variables. Table 3 shows the results of the unit root tests conducted on the weekly data for each market. All the variables except the futures prices on zinc appear to be I(1), or integrated of order 1. The λ represents the proportion of the sample at which the break point occurs, measured from the beginning of the data sample to the break, with the breaks determined visually (footnote 2).
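Two mechanical pieces of this methodology are easy to sketch: the non-overlapping k-period differencing used for horizon-specific hedge ratios, and a rough I(1) screen. Note the hedge here: statsmodels does not ship Perron's (1989) break-point test, so the plain ADF test stands in below (the DFGLS class in the third-party arch package could likewise substitute for the DF-GLS test used later); this is an illustrative approximation, not the paper's exact procedure.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def k_period_hedge_ratio(spot, futures, k):
    """Hedge ratio for a k-period horizon, per Geppert (1995) and
    CLS (2004): keep every k-th observation (non-overlapping windows,
    so errors are not mechanically autocorrelated), difference, then
    take Cov/Var as in equation (4)."""
    dP = np.diff(np.asarray(spot, dtype=float)[::k])
    dF = np.diff(np.asarray(futures, dtype=float)[::k])
    return np.cov(dP, dF, ddof=1)[0, 1] / np.var(dF, ddof=1)

def looks_i1(series, alpha=0.05):
    """Rough I(1) screen: the level fails to reject a unit root while
    the first difference rejects it. This ignores structural breaks,
    which is why the paper relies on Perron (1989) instead."""
    series = np.asarray(series, dtype=float)
    return (adfuller(series, regression="ct")[1] > alpha
            and adfuller(np.diff(series), regression="c")[1] < alpha)

# Hedge ratios for 1- to 8-week horizons from weekly series p, f:
# ratios = {k: k_period_hedge_ratio(p, f, k) for k in range(1, 9)}
```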
The finding on zinc might be attributed to the low power of unit root tests. In any case, the test statistic is close to passing the test and would hypothetically pass at the 12% level of significance. The DF-GLS test was also used to provide further insight, and the finding from this test shows that zinc does in fact have a unit root. Coupled together, these findings point to zinc futures being I(1). The fact that the cash prices have unit roots suggests that the futures should as well, given the no-arbitrage and market efficiency conditions assumed in the literature (CLS (2004)). Therefore, all prices are treated as having unit roots.

[INSERT TABLE 3 HERE]

Footnote 2: A range of possible break points was selected, including the minimum, mean, and maximum. All three were tested with their respective lambda statistics, and all proved to change the results very little. Also, the test statistics were calibrated as needed to more appropriately capture a break that falls between the values offered in the study; they were altered by approximately 0.75 for each incremental move away from the lambda statistic to produce more reliable estimates.

Given that all the variables appear to be integrated of order 1, the optimal hedge ratios are calculated for nine hedging horizons, ranging from one day and one week up to eight weeks. The results are reported in Table 4. All the estimates in Table 4 prove to be significant at the 1% level. Estimation of the ratios is performed using simple OLS on equation (4), with the variables differenced to account for unit roots and autocorrelation. Ultimately, the optimal hedge ratios do not all converge towards one across greater hedging horizons. Many of them fluctuate across the horizons, but each of the markets (except aluminum) exhibits a distinct trend towards a value greater than one. The very short horizon (one-day) optimal hedge ratios are all less than 0.65 but, as soon as the differencing frequency is increased to one week, the optimal ratios increase to a range from 0.83 (Tin) to 0.99 (Nickel). The ratios at the 4-week horizon are all greater than or equal to 1, ranging from 1.00 (Aluminum and Copper) to 1.11 (Nickel). At the longest horizon we study, the optimal ratios range from 1.00 (Aluminum) to 1.17 (Nickel). Overall, the average (median) 8-week hedge ratio across the six metals is roughly 1.074 (1.066). Empirically, this means that the trader should be hedged 7.4% above the respective spot position. This finding is contrary to the findings of CLS and Geppert, who both found that the optimal hedge ratio converges to one across greater time horizons. Table 4 suggests that, in general, the proportion of spot positions to be covered by opposite positions in futures markets is greater than one. This finding is important but, at this point, should be considered preliminary, since the I(1) prices in this study are assumed to trend together over time, which can lead to misleading results in an OLS regression (Chowdhury (1991)).

[INSERT TABLE 4 HERE]

Given that all the variables in the model contain unit roots, it is anticipated that the spot and futures prices in each market share a long-run stochastic trend. Table 5 verifies that each of the six markets studied shares a mean-reverting relationship, as in each case the test statistic is greater than the upper I(1) bound found in the Pesaran study.
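The Pesaran bounds test reported in Table 5 is not available in statsmodels, but the Engle-Granger test, which the authors report as reinforcement, is; a minimal cross-check sketch follows, with placeholder series names.

```python
from statsmodels.tsa.stattools import coint

def engle_granger_cointegrated(spot, futures, alpha=0.10):
    """True if the Engle-Granger two-step test rejects the null of
    'no cointegration' between the spot and futures price levels."""
    t_stat, p_value, _ = coint(spot, futures, trend="c")
    return p_value < alpha
```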
The test employed here has two variables (k = 2), an intercept, and no trend. The 10% critical value is 4.14 in this case, which means that for the series to be cointegrated the test statistic must exceed 4.14. The use of this test improves on several earlier studies that used the Engle-Granger method, and it takes advantage of the minimum variance criterion that is also used in the risk management application of this study (Alexander (1999)). These tests were reinforced with Engle-Granger tests, which led to the same conclusions as the Pesaran approach.

[INSERT TABLE 5 HERE]

Having confirmed that the variables within each market are cointegrated, the associated joint estimation that ties this long-run co-movement together is performed. The estimation approach is that of CLS, which jointly estimates the long-run and short-run hedge ratios. Table 6 presents the results from this approach, and it is apparent that they are very similar to those of the previous short-run estimation. This estimation, which correctly includes the long-run properties of the cash-futures relationship, produces differences from the short-run estimates of up to -2.7% at one weekly horizon. Aggregating all other horizons reported in Table 6, the difference narrows to an average of 0.1%, confirming the convergence, though we should note that the sign of this difference is not consistent either across horizons or across markets.

Finally, Table 7 presents findings on how effective these optimal hedge ratios would be in a portfolio consisting of cash and futures positions. All the metals are considered in this example to thoroughly evaluate the effectiveness of the hedges. All the values exhibit a common trend towards the mid-90% range across the hedging horizons. The hedging effectiveness value represents the percentage reduction in variance over and beyond that of an unhedged portfolio. It is evident that these optimal hedge ratios are useful in minimizing variance; even more important, the hedges improve across the time horizons. Namely, a hedge may be more favorable as the hedging horizon is lengthened, given the nature of price discovery in the spot-futures relationship.

[INSERT TABLE 7 HERE]

A viable question in commodity purchasing departments is: how far out should a company hedge, given the nature of the commodity landscape? The empirical evidence contained in this study indicates that, in general, a longer hedging horizon may help mitigate the risk in the spot market. The results provided in Table 7 indicate that the optimal hedging horizon is eight weeks, the longest hedging horizon considered in this study. This is not to say that the effectiveness value is always greatest at this horizon; in the case of aluminum, the 6-week horizon is preferred to the 8-week horizon. Rather, these values generally improve asymptotically across the horizons, and it is therefore inferred that this would also occur in a broader dataset. A longer hedging horizon is the strategy advocated in this study.
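Out-of-sample hedging effectiveness of the kind reported in Table 7 is the Ederington-style variance reduction. A sketch, assuming the split described with equation (13): β is estimated on the first half of the sample and effectiveness is measured on the second half, with Q = 1.

```python
import numpy as np

def hedging_effectiveness(spot, futures, k):
    """Percentage reduction in the variance of k-period changes of the
    hedged position relative to an unhedged one; beta estimated
    in-sample (first half), effectiveness measured out-of-sample."""
    dP = np.diff(np.asarray(spot, dtype=float)[::k])
    dF = np.diff(np.asarray(futures, dtype=float)[::k])
    half = len(dP) // 2
    beta = (np.cov(dP[:half], dF[:half], ddof=1)[0, 1]
            / np.var(dF[:half], ddof=1))
    hedged = dP[half:] - beta * dF[half:]   # cash cost offset by long futures
    return 1.0 - np.var(hedged, ddof=1) / np.var(dP[half:], ddof=1)
```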
VI. CONCLUSIONS

This study investigates the optimal hedge ratio and hedging effectiveness for six base metals markets. After applying careful econometric methods, we first document that the short-run optimal hedge ratio is increasing in the hedging horizon. If a corporate hedger is attenuating demand risks for his company over a longer time frame in the futures market, he should increase his exposure to the futures market as his hedging horizon lengthens. Second, we show that, contrary to results in other markets, the optimal hedge ratio does not converge to the naive ratio of 1 for our markets over our time period at longer horizons. We document that the appropriate position for a hedger is to over-hedge by over 5% in order to best minimize price impacts. Finally, we find that the hedging effectiveness of the optimal hedge ratios we computed in an out-of-sample methodology is very high, in the mid-90s in percentage terms. In other words, implementing a hedge with the hedge ratios we determined would eliminate over 90% of price uncertainty for large corporate procurement departments. Overall, the best hedging decision for these markets is to hedge long-term, at about 6 to 8 weeks, with a slightly greater-than-one hedge ratio. These results are robust to the increased volatility over our data period and are of great interest to purchasing departments and other commodity hedgers.

FIGURE 1

Figure 1 graphs, in two panels, the complete time series of data used in the study. In each panel, using the same scale, we highlight the dramatic price increase experienced by the metals markets over the study period. From these representations, we determine the break points that are reported in Table 2.

TABLE 2 - Descriptive Statistics

Table 2 reports sample descriptive statistics for the cash prices of all six metal markets investigated in the study. Over the sample period, each of these markets exhibited a large change in both price level and volatility. The table reports the mean, maximum, minimum and standard deviation of prices for each market over two distinct periods: before and after the price-level break. The break points are determined visually from the historical price charts and are reported in the table. In addition, the table reports the ratio of volatility to price level (σ/µ) before and after the break, to confirm that the break represents both a change in level and a change in volatility of prices.
The figure shows the dramatic upward shift in prices that has occurred in all six of the metals markets studied.

This strategy would lock in the price of $1,357/MT in October of 1998 for delivery in May 2000. Assuming away transaction costs, this simple hedging strategy would save the hypothetical firm roughly $14.1 million (= (1,498 − 1,357) × 100,000). The questions a commodity hedger has to answer before implementing her strategies are: what is the best hedge ratio, and what is the best time horizon for this hedge? Our methodology allows us to answer these two questions.

TABLE 5 - Pesaran Cointegration Tests

Table 5 reports the test statistics for the cointegration of the data series. Specifically, the Pesaran (1997) cointegration test is run. The test employed has two variables (k = 2), an intercept, and no trend; the 10% critical value is 4.14 in this case. Cointegration was also found in Engle-Granger tests using the ADF and Engle-Granger test statistics.

TABLE 6 - Joint Estimation of the Short-Run and Long-Run MV Hedge Ratios

Table 6 reports the empirical results of estimating the optimal minimum variance hedge ratio for each of the six metal markets. The estimation in this table relies on Equation (12):

(P_1 − P_0) = α_1 + α_2·P_{t−1} + α_3·F_{t−1} + β·(F_1 − F_0) + ε    (12)

where the (short-run) MV hedge ratio reported is the point estimate of β in Equation (12). The table also contains the standard deviation of the estimate and the adjusted R-squared for that estimation. The long-run MV hedge ratio is computed as −α_3/α_2 and is also reported. The analysis is repeated at different levels of differencing, from as short as one day to as long as 8 weeks. Due to a data constraint (our time series contains 433 weeks' worth of data), we limit our longest hedging horizon to 8 weeks to ensure our results remain statistically meaningful.
8,547
2008-12-31T00:00:00.000
[ "Business", "Economics" ]
YNU-HPCC at SemEval 2017 Task 4: Using A Multi-Channel CNN-LSTM Model for Sentiment Classification

In this paper, we propose a multi-channel convolutional neural network-long short-term memory (CNN-LSTM) model that consists of two parts, a multi-channel CNN and an LSTM, to analyze the sentiment of short English messages from Twitter. Unlike a conventional CNN, the proposed model applies a multi-channel strategy that uses several filters of different lengths to extract active local n-gram features at different scales. This information is then composed sequentially using LSTM. By combining both CNN and LSTM, we can consider both local information within tweets and long-distance dependencies across tweets in the classification process. Officially released results show that our system outperforms the baseline algorithm.

Introduction

Social network services (SNSs) such as Twitter, Facebook, and Weibo are used daily to express thoughts, opinions, and emotions. On Twitter, 6,000 short messages (tweets) are posted by users every second (http://www.internetlivestats.com/twitter-statistics/). Therefore, Twitter is considered one of the most concentrated opinion-expressing venues on the Internet. Subjective analysis of this type of user-generated content has become a vital task for politics, social networking, marketing, and advertising. The potential applications of sentiment analysis have been the motivation behind SemEval 2017 Task 4, a competition involving a series of subtasks that focus on Twitter sentiment classification. Subtask A involves message polarity classification, which requires a system to classify whether a message is of positive, negative, or neutral sentiment. Subtasks B and C involve topic-based message polarity classification, which requires a system to classify a message on two- and five-point scales toward a certain topic.

Various approaches have been proposed to analyze the sentiment of text, and deep neural networks have achieved state-of-the-art results in recent years. Proven successful text classification methods include convolutional neural networks (CNN) (LeCun et al., 1990; Y. Kim, 2014; Kalchbrenner et al., 2014) and long short-term memory (LSTM) (Hochreiter et al., 1997; Tai et al., 2015). In general, a CNN applies a convolutional layer to extract active local n-gram features, but loses the order of words. By contrast, an LSTM can model texts sequentially. However, it focuses only on past information and draws conclusions from the tail part of texts; it fails to capture the local response from temporal data.

In this paper, we propose a multi-channel CNN-LSTM model for sentiment classification. It consists of two parts: a multi-channel CNN and an LSTM. Unlike a conventional CNN model, we apply a multi-channel strategy that uses several filters of different lengths. The model is thus able to extract active n-gram features at different scales. An LSTM is then applied to compose those features sequentially. By combining both CNN and LSTM, both local information within tweets and long-distance dependencies across tweets can be considered in the classification process. To train the proposed neural model, which has many parameters, effectively, we pretrained it using a distant supervision approach (Go et al., 2009). In our experiments, we present the participation of the proposed model in SemEval 2017 Task 4, Subtasks A, B, and C (Rosenthal et al., 2017). The remainder of this paper is organized as follows.
In Section 2, we detail the architecture and multi-channel strategy of our model. Section 3 summarizes the comparative results of our proposed model against the baseline algorithm. Section 4 offers a conclusion.

Figure 1 shows the architecture of our model. The model consists of six types of layers: embedding, convolution, max-pooling, LSTM, dense, and softmax. First, a tweet is input as a series of vectors of constituent words and transformed into a feature matrix by an embedding layer. The feature matrix is then passed into three parallel CNNs with different filter lengths. The max-pooling layer extracts the maximum over the different CNN results, which is intended to capture the salient features, and inputs them to the LSTM layer. Then, ordinary dense and softmax layers take the outputs from the LSTM and produce the final classification result.

Embedding Layer

The embedding layer is the first layer of the model. Each tweet is regarded as a sequence of word tokens t_1, t_2, …, t_N, where N is the length of the token sequence. According to statistics on the tweets collected from Twitter in Section 3.1, about 95% of tweets are shorter than 30 words. Thus, we empirically limit the maximum N to 30. Any tweet longer than 30 tokens is truncated to 30, and any tweet shorter than 30 is padded to 30 using zero padding. Every word is mapped to a d-dimensional word vector. The output of this layer is a matrix T ∈ R^{N×d}.

CNN Layer

In each CNN layer, m filters are applied to a sliding window of width w over the matrix of the previous embedding layer. Let F ∈ R^{w×d} denote a filter matrix and b a bias. Assuming that T_{i:i+j} denotes the token vectors t_i, t_{i+1}, …, t_{i+j} (with t_k = 0 if k > N), the result of each filter is a feature vector f, where the i-th element of f is generated by:

f_i = F ⊗ T_{i:i+w−1} + b

where ⊗ denotes the convolution action. Before passing f to the next layer, a nonlinear activation function is applied; here, we use the ReLU function (Nair and Hinton, 2010) for faster calculation. Convolving filters with window width w can extract w-gram features. By applying multiple convolving filters in this layer, we can extract active local n-gram features at different scales. To keep the output sizes of the different filters identical, we apply zero padding to the token vectors before convolution.

Max-over Pooling Layer

In this layer, the maximum value over the different filters is taken as the most salient feature. Because a CNN layer with window width w extracts w-gram features, the maximum values of the CNN layer output are considered the most salient information in the target tweet. We choose max rather than mean pooling because the salient feature represents the most distinguishing trait of a tweet.

LSTM Layer

The architecture of a recurrent neural network (RNN) is suitable for processing sequential data. However, a simple RNN is usually difficult to train because of the vanishing gradient problem. To address this problem, the LSTM introduces a gating structure that allows for explicit memory updates and deliveries. As shown in Figure 2, the LSTM calculates the hidden state h_t using the following equations:

Gates:
f_t = σ(W_f x_t + U_f h_{t−1} + b_f)
i_t = σ(W_i x_t + U_i h_{t−1} + b_i)
o_t = σ(W_o x_t + U_o h_{t−1} + b_o)

Input transformation:
c̃_t = tanh(W_c x_t + U_c h_{t−1} + b_c)

State update:
c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t
h_t = o_t ⊙ tanh(c_t)

where x_t is the input vector; c_t is the cell state vector; W, U, and b are layer parameters; f_t, i_t, and o_t are gate vectors; and σ is a sigmoid function. Note that ⊙ denotes the Hadamard product.

Hidden Layer

This is a fully connected layer. It multiplies the results from the previous layer by a weight matrix and adds a bias vector. The ReLU activation function is also applied. The resulting vectors are finally input to the output layer.
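The architecture just described maps onto a few Keras layers. The sketch below is an illustrative reconstruction, not the authors' released code: N = 30, d = 200, m = 200 filters, filter widths (1, 2, 3), an LSTM of 512 units and dropout 0.5 follow the values given in this paper, while vocab_size and the 128-unit dense width are placeholder assumptions, and the element-wise maximum across channels is one plausible reading of the max-over pooling step.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N, d, m, vocab_size, n_classes = 30, 200, 200, 50_000, 3  # vocab_size assumed

inp = layers.Input(shape=(N,), dtype="int32")
emb = layers.Embedding(vocab_size, d)(inp)      # GloVe-initialised in the paper

# Multi-channel CNN: parallel convolutions with filter widths 1, 2 and 3,
# zero-padded ("same") so all channel outputs keep identical sizes.
channels = [layers.Conv1D(m, w, padding="same", activation="relu")(emb)
            for w in (1, 2, 3)]

# Max-over pooling: element-wise maximum across channels keeps the most
# salient w-gram feature at each position before the LSTM.
pooled = layers.Maximum()(channels)

x = layers.LSTM(512)(pooled)                    # sequential composition
x = layers.Dropout(0.5)(x)
x = layers.Dense(128, activation="relu")(x)     # hidden layer; width assumed
x = layers.Dropout(0.5)(x)
out = layers.Dense(n_classes, activation="softmax")(x)

model = Model(inp, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```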
Output Layer

This layer outputs the final classification result. It is a fully connected layer using softmax as the activation function. The output of this layer is a vector whose j-th element is calculated by:

P(y = j | x) = exp(x^T w_j) / Σ_{k=1}^{K} exp(x^T w_k)

where x is the input vector, w_j is the weight vector of class j, and K is the number of classes. Thus, the final classification result ŷ is:

ŷ = argmax_j P(y = j | x)    (6)

Data Preparation

We implemented a simple tokenizer to process tweets into arrays of tokens. Because we participated only in the English tasks, all characters other than English letters or punctuation are ignored. Every tweet is processed with the patterns shown in Table 1. We applied the first four patterns and lower-cased all letters to accommodate the known tokens in the GloVe (Pennington et al., 2014) pretrained word vectors.

Table 1: Example of pre-processing patterns.

Before training on the given tweets, we pretrained the model using data with distant supervision. Two external datasets were used. The first was crawled from Twitter: thanks to the streaming API kindly provided by Twitter, we collected approximately 428 million tweets (all published between Nov. 2016 and Jan. 2017). Approximately one sixth of them had only one emoji or emoticon, which perfectly fits the condition for weak labeling. The second dataset was Sentiment140, which provides 1.6 million tweets with a balanced distribution.

We used GloVe pretrained data (http://nlp.stanford.edu/projects/glove/) to initialize the weights of the embedding layer. GloVe is a popular unsupervised machine learning algorithm for acquiring word embedding vectors; it is trained on global word co-occurrence counts and achieves state-of-the-art performance on word analogy datasets. In this competition, we used the 200-dimensional word vectors pretrained on two billion tweets.

The hyper-parameters were tuned on the train and dev sets using the scikit-learn (Pedregosa et al., 2012) grid search function, which iterates through all possible parameter combinations to identify the best performance. The best-tuned parameters are as follows: the CNN filter count is m = 200; the lengths of the multi-channel convolving filters are 1, 2, and 3; and the dimension of the hidden state in the LSTM is 512. To prevent over-fitting, we applied dropout (Tobergte and Curtis, 2013) after the LSTM layer and the fully connected layer at a rate of 0.5. Training also ran with early stopping (Prechelt, 1998), terminating if the validation loss had not improved within the last 5 epochs.

Evaluation Metrics

We evaluated our system on Subtasks A, B, and C. Subtask A was a three-point message polarity classification. Subtasks B and C involved ordinal sentiment classification on two- and five-point scales. The metrics for Subtasks A and B were the average F1-score, average recall, and accuracy. The F1-score of one class (p denotes positive here as an example) was calculated as:

F1_p = 2 π_p ρ_p / (π_p + ρ_p)

where π_p and ρ_p denote precision and recall, respectively. The metrics for Subtask C were MAE^M and MAE^μ, which were calculated as:

MAE^M = (1/|C|) Σ_{j=1}^{|C|} (1/|Te_j|) Σ_{x_i ∈ Te_j} |h(x_i) − y_i|

MAE^μ = (1/|Te|) Σ_{x_i ∈ Te} |h(x_i) − y_i|

where y_i is the true label of item x_i, h(x_i) is the predicted label, and Te_j is the set of test documents whose true class is c_j. Higher F1-score, recall, and accuracy, and lower MAE^μ and MAE^M values, indicate more accurate forecasting performance.
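The metrics above are short to implement; a sketch, assuming integer class labels stored as NumPy arrays (for the MAE measures the ordinal five-point labels are mapped to integers).

```python
import numpy as np

def f1_class(y_true, y_pred, cls):
    """Per-class F1 = 2*pi*rho / (pi + rho), with precision pi and recall rho."""
    tp = np.sum((y_pred == cls) & (y_true == cls))
    pi = tp / max(np.sum(y_pred == cls), 1)
    rho = tp / max(np.sum(y_true == cls), 1)
    return 2 * pi * rho / max(pi + rho, 1e-12)

def mae_macro(y_true, y_pred):
    """MAE^M: mean absolute error averaged per true class (macro)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean([np.mean(np.abs(y_pred[y_true == c] - c))
                          for c in np.unique(y_true)]))

def mae_micro(y_true, y_pred):
    """MAE^mu: mean absolute error over all test items (micro)."""
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true))))
```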
Results and Discussion

To demonstrate the advantages of our system architecture, we ran a 5-fold cross-validation on different subsets of layers, keeping the embedding and hidden layers fixed. A single LSTM achieved 0.617 accuracy on the train and dev data. A single CNN achieved 0.606, a multi-channel CNN 0.563, and a single CNN with LSTM 0.603. Our multi-channel CNN with LSTM outperformed all the other architectures with an accuracy of 0.640. Table 2 presents the detailed results of our evaluation against the baseline algorithm. It is noteworthy that our system achieved 0.647 accuracy on Subtask A, as the best score for this subtask was 0.651. The evaluation results reveal that our proposed system improves considerably on the average baseline, which we attribute to our multi-channel CNN-LSTM architecture and distant-supervision training. The proposed system can effectively capture both local information within tweets and long-distance dependencies across tweets.

Conclusion

In this paper, we described our system submissions to the SemEval 2017 Workshop Task 4, which involved sentiment analysis in Twitter. The proposed multi-channel CNN-LSTM model combines a CNN and an LSTM to extract both local information within tweets and long-distance dependencies across tweets. A large number of tweets with distant supervision were leveraged to pretrain the model. Officially released results revealed that our system outperformed all baseline algorithms, and ranked 14th on Subtask A, 10th on Subtask B, and 8th on MAE^μ of Subtask C. In the future, we will attempt to enhance the tokenizer and model architecture to achieve an improved classification system.
2,657.2
2017-08-01T00:00:00.000
[ "Computer Science" ]
Ubiquitous Computing and Its Applications in the Disease Management in a Ubiquitous City

The actual challenge in health is to manage patients with chronic diseases from a holistic approach in which technology around the patient and in the city enhances their wellness. This paper examines the relations between health, devices and models of technological cities, and how these can be modeled to provide a more cost-efficient solution that is also less invasive and more natural to the end users. In light of this, usable and accessible software and a wide range of devices, including PCs, smartphones, tablets and Smart TVs, have been tested. This manuscript provides a good understanding of how technology and disease-management care models interact with the patient.

Introduction

Patients with chronic diseases, their management (process management) and their integral treatment (socio-health care) represent one of the main current challenges for health systems in any country [1]-[15]. Advances in the management of chronic diseases and of patients with multiple pathologies (patients with two or more chronic pathologies) require a paradigm shift from our usual concepts of acute-patient management within the National Health Systems (NHS), and a transformation of the current conceptual frameworks, so that, holistically, the citizens, their environment and their socio-health needs become the real center of the health system [16]-[24].

We are fully immersed in the third great revolution of humanity, the Technological Revolution, initiated with the change from analogical, mechanical and electronic technology to digital technology aimed at singularity, with radical changes caused by computing and ICT, and where the raw material is information transformed into data, whether structured or unstructured. In medicine we are moving towards the singularity, and the future of medicine lies in multidisciplinary integration with genetics and molecular biology, biomedical engineering (biotechnology), artificial intelligence, robotics and nanotechnology, in search of a Hospital without Barriers.

Thanks to the great development of biosensors, ubiquitous computing and environmental intelligence, the data to be managed (biological data such as ECG, SatO2, blood pressure, heart rate, temperature, ...) can be transferred digitally. This allows remote diagnoses using uHealth platforms that offer a set of applications for patients and professionals in the socio-health sector. Through an intuitive interface, it is possible to access the different uHealth services, which range from medical care with video consultations, e-check-ups and hospitalization at home, to tele-education services for health and clinical sessions between professionals. These can be conducted via videoconference or broadcast in 3D using video streaming, which allows drawing a real scenario of a hospital without barriers; such services are key tools in the new models of integrated health management and in the processes of socio-health care for dependent people.

In this scenario, the patient will be surrounded by autonomous sensors (scales, tensiometers, ECG recorders, and other devices that can be implanted in the skin or tissues) and by environmental sensors of temperature, humidity, position, etc., which form ad hoc networks, be they BAN (Body Area Network), PAN (Personal Area Network) and/or HAN (Home Area Network), and which acquire and transmit all the information of interest.
At the end of the 1980s, researchers at Xerox's Palo Alto Research Center (PARC) distanced themselves from personal computing (which they identified as being dominated by computers) towards what they called ubiquitous computing, which "merges with the natural human environment and allows computers to vanish in the background" [25]. In other words, they were interested in "invisible" computers that allow us to focus on living beyond computational devices. According to Weiser, ubiquitous computing would not only free us from the limitations of desktop computing, it would also free us from insulating environments such as immersive or virtual reality environments. According to Weiser, from the perspective of design, ubiquitous computing concentrated more on cultural and social aspects than on technological ones [26]. Unfortunately, Weiser died and, although the concept of ubiquitous computing continued to expand, the social and cultural (non-technical) aspects are still insufficiently explored and represented in the design process (Figure 1).

In 1996, Weiser and Seely Brown predicted the "advent of calm technology" [27], its main software-side derivative being the Internet of Things (IoT). The concept of the Internet of Things is still under discussion, with different authors proposing different views of how the devices and information networks should be integrated. The initial vision of the Auto-ID Center was to "tag" all existing objects and people and manage them through ubiquitous computing. Such a system would be able to instantly identify any "tagged" entity. The Internet of Things must encode from 50 to 100,000 million objects and follow their movement; every human being is surrounded by 1,000 to 5,000 objects [28].

The concept crystallizes under the definition of the Spime [29]. Spimes are "things" located in the real world that have a unique digitally readable identity; they are traceable; they can be found by search engines; they are recyclable, designed and stored virtually and, in many cases, they can be manufactured by the user, and they allow intelligence and interaction processes. Another, alternative view, from the world of the Semantic Web, focuses instead on making all things (not just tagged ones) have an address based on any of the existing protocols, such as URIs. The objects, the things, do not converse, but in this way they could be referenced by other agents, such as powerful centralized servers that act for their human owners; this is what we call Ambient Intelligence. This vision is the one reigning today with the use of supercomputers.

The EnI (Environmental Intelligence) vision places the person at the center of future developments. Moreover, technology must be developed for people, instead of people adapting to technology; EnI offers the possibility that in any everyday environment (at home, moving on the street, in transport, in public places, in hospitals, ...) users may have integrated intelligence that facilitates daily life [29].

In the broadest sense, ubiquitous computing comprises any number of mobile, portable, distributed and context-sensitive computer applications. In this sense, ubiquitous computing could be the investigation of "how information technologies can be imbued in everyday objects and how they can lead to improvement and help for people's lives".
Interesting types of ubiquitous computing are those that openly seek to create unique forms of habitable space and ways of living. These types, when used in daily life, bring out not only spatial aspects, but also temporal and sociocultural aspects. The Internet of Things represents a great advance in the challenge of data acquisition, while cognitive computing provides the intelligence necessary for the prediction of knowledge.

Neuromorphic Computing & Cognitive Intelligence in Ubiquitous Cities

The objective of cognitive computing is to simulate human thought processes in a computerized model. Using self-learning algorithms that employ data mining, pattern recognition and natural language processing, the computer can mimic the way the human brain works. When we speak of "cognitive", we refer to the ability to process information, learn, reason, memorize, solve problems and make decisions. We can say that we are using "cognitive computing" when machines imitate those cognitive functions to make themselves intelligent.

Machine learning is a subset of artificial intelligence that allows researchers, data scientists, engineers and analysts to build algorithms that can learn and make predictions based on data. Instead of following a specific set of rules or instructions, an algorithm is trained to detect patterns in large amounts of data. In its most basic form, it is about making machines learn, and for them to learn it is necessary to give them information, to provide them with data called "training data".

Deep learning is a type of machine learning that today represents the most advanced form of neural networks, artificial intelligence models based on the behavior of the nervous system. Deep learning algorithms are very promising, but to date they have not been able to replicate ordinary creative human capacity; it is thus a bit premature to expect these algorithms to be creative.

On the other hand, machine learning helps cognitive systems to learn, reason and engage with us in a more natural and personalized way. The more information and feedback the system has from its users, the more intelligent it will become.

Neuromorphic computing is the future: it is expected to mimic the way the human brain works by replacing current transistor-based circuits with an architecture inspired by nerve cells, or neurons. The benefits of this approach offer more than speed. Charles Augustine, a scientist at Intel Circuit Research Labs, suggests that "neuromorphic designs may need a calculation energy between 15 and 300 times lower" compared to the latest-generation CMOS designs. It is an ideal technology for analysis-based tasks such as data detection, adaptive AI, associative memory and cognitive computing. Low-power neuromorphic hardware may be perfect for future supercomputing systems, especially as we move from the petascale era (machines measured in petaflops, i.e., a quadrillion calculations per second) to the exascale era (machines measured in exaflops, a quintillion calculations per second).

The implementation of computerized cities that adapt to the functional diversity of users and their interests and needs, and that incorporate imbued intelligence, is the best hope we have of achieving sustainable development for all without compromising human intervention [30].
Figure 2 shows the different stages that have been traversed up to the definition of this concept, according to the infrastructure, the services and the interaction that the latter offer to people. The Ubiquitous City aims to materialize a new way of understanding how people interact with technology, according to the concept of "calm technology" [27]. With ubiquitous computing and the Ubiquitous City, one more step is taken in the evolution of human-machine interaction towards resembling human-human interaction. Regarding infrastructure, the Ubiquitous City proposes a vision of the "Internet of Things" in which the new objects of the technoculture universe have to be "Spimes" [29], that is, things that are located both in urban space and in domestic space, that have a unique digitally readable identity, are traceable, can be found by search engines, are recyclable, designed and stored virtually and, in many cases, can be manufactured by the user.

The Ubiquitous City is a futuristic city of the 21st century that enables ubiquitous computing services with cognitive intelligence, resource management and access to information. This type of city maximizes the standard of living and the value of a region through the innovation of each function of the city. It merges high-tech infrastructure and ubiquitous services in the urban area. The Ubiquitous City is the highest level of information city, able to innovate every function of the city in aspects such as quality of urban life, safety, citizen welfare and the creation of new health management services. The Ubiquitous City is the convergence of construction, information technologies, content and cognitive intelligence.

After the appearance of the report Our Common Future, coordinated by Gro Harlem Brundtland within the framework of the United Nations [31], the objective of "sustainable development" was promoted, meaning development that allows us "to satisfy our current needs without compromising the ability of future generations to meet theirs". The concept of the Ubiquitous City adapts quite well to the sustainable management of our lives without generating the perception of a negative impact on the final user (Ubiquitous Cities and Sustainable Cities).

Currently, all sustainable cities in planning are ubiquitous cities, and there are currently no ubiquitous cities in planning that are not sustainable. Examples can be found among the eight most powerful Sustainable City projects [32]: New Songdo, Dongtan, Masdar City, Sino-Singapore Tianjin Eco-City, Sino-Singapore Nanjing Eco High-Tech Island, IT Valley Plan, Meixi Lake District and Sitra Low2No; of these, New Songdo, Sino-Singapore Nanjing Eco High-Tech Island and the IT Valley Plan were born with the approach of also being ubiquitous.

In this ubiquitous-city environment, with the help of ubiquitous computing and current data management platforms, it becomes possible to manage chronic patients and disease management in an easy, effective and efficient way.
Disease Management and Ubiquitous Computing

The existing models for the management of patients with chronic diseases requiring hybrid processes and care, not only health care but also social care (socio-health processes), are few and very young in their approach. They all derive from the initial Chronic Care Model (CCM), developed by Ed Wagner and collaborators at the MacColl Institute for Healthcare Innovation in Seattle (USA) to improve the management of chronic diseases within integrated-provider systems such as the North American Group Health Cooperative and Lovelace Health System [33]. This Chronic Care Model (CCM) recognizes that the management of chronic diseases is the result of the interactions of three overlapping areas: 1) the community or country as a group, with its health policies, its health model and its multiple public and private resources; 2) the health system, with its funding and provider organizations and public and private insurance systems; 3) clinical practice or health care, primary and specialized, identifying those essential interdependent elements that must interact not only effectively but also efficiently to achieve optimal care for patients with chronic diseases.

The ultimate purpose of this model is to place active and well-informed patients as the central element of a system that has a dynamic team of professionals with the necessary knowledge and experience (Figure 3).

This model was the first widely disseminated system and the one that served as the basis for all subsequent models, such as the later Extended Chronic Care Model (ECCM), which added several key complementary contributions to the Chronic Care Model (CCM): 1) At the level of macro-management, the need for a unitary and strong socio-health policy that can redirect socio-health services and guide them towards the real needs of patients with chronic pathologies; this is a key primary element of the model, requiring strong direction and both inter-territorial and intersectoral collaboration, which brings to the model a real integration of policies, financial sustainability and the contribution of prepared and qualified human resources. 2) At the level of meso-management, attention in this model remains centered, as in Wagner's, on the active role of community agents, but with emphasis on the great importance of the integration and coordination of socio-health services. 3) At the level of micro-management, the established interaction within the Chronic Care Model (CCM) between health professionals and patients (a binary interaction) is broadened by involving the community and by replacing the term "activated", in reference to patients, with the new term "motivated and prepared" patient (Table 1).

Patients with multiple chronic diseases (multi-pathological patients), or those with long-term needs that are not only medical but also social, with a functional deficit that prevents normal daily activities and makes them fully dependent, are the patients that consume the largest volume of health resources in a country, so the creation of new management models valid for this type of citizen remains a national challenge [39] [40] [41] [42].
At this historical moment, the organization of health and socio-health care aimed at the management and treatment of these multi-pathological patients is wholly inadequate, and it is necessary to promote the work of multidisciplinary teams (health professionals and social services professionals acting simultaneously in a synergistic and coordinated way) so as to guarantee comprehensive and equitable socio-health care: continuity of long-term care together with an improvement in the quality of care, a more rational use of human, structural and financial resources (efficiency), and a contribution to improving the quality of life of both patients and their family environment, all according to criteria previously defined in the respective Health Laws of the different NHS [43]-[49].

To achieve this purpose, an initial classification is needed as a first step: defining the criteria for identifying those persons who are susceptible to inclusion in integrated socio-health care, answering: 1) Which patients are eligible for integral socio-health care? 2) What kind of personal and health care needs does each person have, depending on their family environment and socioeconomic status?

A stratification based on the Kaiser Permanente pyramid could facilitate the classification of these patients into three levels of intervention according to their level of complexity. The group of patients located at the top of the pyramid, although they represent only 3% to 5% of the cases, are the most complex and consume the largest share of socio-health resources, so it is necessary to assign these people comprehensive care plans designed "ad hoc", in order to reduce the unnecessary use of specialized resources and, especially, to avoid hospital admissions (Figure 6); a sketch of this stratification rule follows below.

Regardless of the model chosen, what is clear is that models specifically designed to improve the management of chronic diseases and multi-pathological patients are needed, since there are no practical clinical guidelines that address multiple conditions, or that are designed to allow both primary and specialized care professionals to consider the individual circumstances and preferences of patients with multiple chronic diseases through multidisciplinary integration between professionals in the healthcare environment and professionals in the social care environment [5] [7].

Apart from the management models, it is also necessary to create quality standards for these socio-health services for patients with multiple chronic diseases, particularly in relation to the coordination of care, the education of patients and caregivers, and training in self-care support that takes individual preferences and circumstances into account. It is necessary to incorporate tools that improve socio-health processes with the help of new technologies, such as e-health, m-health and u-health, which require powerful platforms supporting the analysis and management of large amounts of data (Big Data) and ambient intelligence (AmI), as well as the development of services beyond the limits of the current health system [52] [53] [54].
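The Kaiser-pyramid stratification mentioned above reduces to a simple percentile rule. The sketch below is purely illustrative: the three levels and the 3-5% top tier come from the text, while the complexity score and the cut-off for the middle tier are placeholder assumptions, not part of the referenced model.

```python
import numpy as np

def kaiser_levels(complexity, top_pct=5.0, mid_pct=30.0):
    """Assign level 3 (intensive case management) to roughly the top
    top_pct% most complex patients, level 2 (disease management) to the
    next mid_pct%, and level 1 (supported self-care) to the rest.
    `complexity` is any per-patient complexity score; mid_pct is an
    assumed, tunable cut-off."""
    complexity = np.asarray(complexity, dtype=float)
    hi = np.percentile(complexity, 100 - top_pct)
    mid = np.percentile(complexity, 100 - top_pct - mid_pct)
    return np.where(complexity >= hi, 3, np.where(complexity >= mid, 2, 1))
```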
For this purpose, Information Systems are needed that provide us with data on both the patient and the health aspects, which we can obtain directly from the exploitation of the Minimum Basic Data Set (MBDS), as well as additional information on the most relevant social and socioeconomic aspects, making it easier for managers to make decisions.

This Socio-sanitary Information System (SISS) will allow us, on the one hand, to assess the inclusion criteria for the different offers of the socio-health services portfolio that exist at the moment and, on the other hand, to create and share an integrated socio-health history (HSSI), to obtain a specific social and health MBDS (CMBDSS), and to obtain risk adjustment systems that allow us to know the cost of socio-health processes in order to benchmark efficiency and quality and to allocate resources according to the costs obtained in the different processes of the established portfolio of socio-health services.

These Information Systems (SISS) will allow us, as managers, to: 1) Register patients and assign them an identification code; 2) Facilitate the monitoring of the socio-sanitary process with an ABQ (activity-based quality) methodology focused on the quality of care, knowing, on the one hand, how the socio-health care processes are carried out (activity-based management, ABM) and, on the other hand, the cost of each process (activity-based costing, ABC), all taking as the axis of the process the patient with socio-sanitary needs who has been defined as eligible for inclusion; 3) Follow up the whole process with a scorecard; 4) Finally, carry out evaluation and benchmarking with integrated socio-health information (health and social) available in an integrated socio-health history (HSSI) that is interoperable and accessible from both the social and the health sides.

At the level of Health Systems, the Ministry of Health has information systems and usually has a reference catalog of Social Services together with a database of beneficiaries and users of services, but to obtain an effective method for planning, efficiency and analysis it is necessary to complement them with: 5) A Unique Social History (HSU). 6) A classification of patients according to their etiology and needs: GRASS. 7) A specific social MBDS (CMBDS). 8) A risk adjustment system through an Aggregate Set of Social Attention (CAAS) that allows us to know the cost of social processes in order to make comparisons (benchmarking) of efficiency and quality and to allocate resources based on the costs obtained in the different processes of the social services catalog. 9) Integrated Social Assistance Guidelines (GIAS) for the standardization of processes. 14) The ability to follow up the whole process with a scorecard. 15) An evaluation of the effectiveness of social actions, based on integrated social information.

MIGRAS aims to satisfy the following global objectives: 1) Develop an integral model of classification and grouping by need and consumption of social attention. 2) Extend a culture of balance and analytical control in the allocation of human, material and economic resources and in the adjustment of risks in social care, with special emphasis on dependency. 3) Establish parameters to predict the use of services and their budgetary adjustment, in the financing of both public and private institutions. 4) Assess the introduction of redistribution elements in dependency budgets.
5) Identify elements of improvement in the budget distribution by geography, benefits, etc.

These objectives can be formulated operationally as follows: 1) Calculate the magnitude and variability of the social care burden, expressed in Aggregated Sets of Social Care (CAAS), based on information from the information systems of the Ministry of Health, Social Services and Equality, obtained by applying a user classification through the GRASS. 2) Obtain the average relative weights of the cost of assistance for each of the CAAS. 3) Stratify the use of resources in bands of resource use (BUR) that also make it possible to distinguish the proximity or risk of a jump in the degree of dependence, and to calculate efficiency indices in the use of those resources. 4) Construct a general index to evaluate the efficiency of the budgetary distribution of resources. The index will include the various elements of the measurement of the quality of care, using structure data (expenditure at the primary and specialized levels), process data (efficiency indices in resource utilization) and outcome data (data to evaluate the quality of the indicators).

As specific objectives of the model we establish: 5) Establish the method to know the intra-group variability of the GRASS obtained according to the CMBDS coding of the social "discharges" in the Social Care System and Social Histories, and thus be able to group them into CAAS. 6) Approach a process management model based on their standardized design, reflected in the Integrated Social Care Guidelines (GIAS). 7) Define a methodological framework to be applied to the allocation of costs of each of the GRASS. 8) Approach a model for the evaluation of the quality of social processes, with tools that allow the sharing of results in the interests of quality and efficiency. 12) Next, the processes are studied with the ABQ (Activity-Based Quality) methodology. This study will lead to the Integrated Social Care Guidelines (GIAS), and on these guides the Model of Assignment of Costs to the Processes of Social Attention (MACPAS) will be applied. The latter will be the basis for applying comparative models and evaluating the quality of care, which can be done with the Tool for the Evaluation of Social Care Results (HERAS).

From a sample of individuals, and variables such as the index of socioeconomic deprivation, the consumption of resources at the primary and specialized levels, and the degree of disability and its temporal variation, a regression model could be built, obtaining a formula that simulates spending on social care (dependence, risk of exclusion, ...) and its prognosis.

13) One of the fundamental elements of the MIRAS model is the tool for the allocation of costs to social assistance processes, which is very useful for assessing the cost of each of the CAAS and, finally, for establishing and performing comparisons and benchmarking. 14) It constitutes the module for the allocation of costs derived from the GRASS and CAAS per citizen for the different social cost centers. To this end, it implements an accounting method that allocates costs in a tree-structured manner according to the total expenditure in a given period of time.
15) MACPAS is the tool that allows modeling and defining a system of costs associated with social care processes. As in the field of health and in other areas, the need for a cost model is clear. For this reason, a costing system based on the ABC (Activity-Based Costing) methodology is introduced as support for social care processes.

16) Its philosophy is based on the principle that activities are really the causes that determine the consumption of resources and the subsequent costs. In the social field, this system identifies the activities carried out in the process and employs cost drivers that allow the costs of these activities to be transferred to the different products or services, based on the social processes carried out. In this way, the centers or departments incur costs as they carry out activities, and the cost of the products is the result of the consumption of the activities necessary to obtain them. Therefore, excellence in cost management will require excellent management of activities. The principles of ABC systems can be applied to any type of organization and fit perfectly with the latest management trends, such as quality management models and process reengineering or redesign.

All these data contained in the Unique Social History (HSU), together with the classification of patients according to their etiology and needs (GRASS) obtained from the analysis of the specific social MBDS (CMBDS), and through a risk adjustment system managed through an Aggregate Set of Social Attention (CAAS) that allows us to know the cost of social processes, make it possible to carry out comparisons (benchmarking) of efficiency and quality, to allocate resources according to the costs obtained in the different processes of the catalog of social services and, subsequently, to elaborate Integrated Social Care Guidelines (GIAS) for the standardization of processes.

Digitalizing the Chronic Patient and the Dependant

Siesta TV is a cognitive, accessible and usable Ambient Intelligence (AmI) platform for the third generation of television over the Internet (IPTV), which offers the user bi-directionality and interactivity with multiple services in sectors such as health, leisure, culture and public services [55] [56]; in SiestaTV Cognitive, the services can be consumed on different devices [57]. SiestaCare offers a set of applications for patients and professionals in the socio-health sector. Through an intuitive interface, one can access the different uHealth services, ranging from medical care with video consultations, e-check-ups and hospitalization at home, to tele-education services for health and clinical sessions between professionals with video presentations, or broadcast in 3D by video streaming, which allows drawing a real hospital scenario without barriers.

SiestaCare is based on new interactive digital TV (IPTV) systems, these being key tools in the new models of integrated health management and in the processes of socio-health care for dependent people. An example of this type of device is the iFreeTablet (Figure 7) and the Siesta Operating System [56] [58].
SiestaTV has several service verticals that can be reinterpreted to fulfill other needs. SiestaTV establishes a series of regulations and standards for accessible, usable and intelligent interactive digital TV and, as a consequence, the development of a series of authoring tools that allow the automation of the production processes of interactive multimedia programs for IPTV, for consumption on any device and in mobility, as shown in Figure 8.

Siesta TV is a platform of Ambient Assisted Living (AAL) that, through techniques of Ambient Intelligence (AmI) and Cognitive Intelligence (CI), allows its integration in a Ubiquitous City environment where agile communication between users, the internet of things, advanced telematic services and new devices takes place naturally, automatically, in an integrated way and non-invasively, without requiring explicit user action. The users are all citizens, with special attention and sensitivity towards the elderly and dependent people, and, on the other hand, the agents that provide them with services: Public Administration, non-profit organizations and private operators, including caregivers and the family environment. In this way, Siesta TV contributes to improving accessibility to advanced public services.

The system allows spaces, objects and human activities to be managed in a usable and accessible manner, both in open and closed spaces, in order to empower citizens and the elderly in their natural surroundings, as well as the agents providing services, even when all are located in distributed spaces. The platform and systems are imbued with the best practices of the industry to facilitate living according to the paradigm of active aging, promoting future healthy seniors and providing greater autonomy to currently dependent people, today's elderly, their caregivers and their family environment. Thus, new systems and interaction devices will be developed in the urban area, existing ones in the domestic sphere will be extended, and all new devices will be integrated, with interfaces that are accessible, usable and adaptive.

In short, the creation of the Ubiquitous City AAL platform is proposed to establish a series of regulations and standards for the ubiquitous, generally accessible friendly city and, as a consequence, the development of a series of tools that allow the automation of process and content generation techniques and their integration with spimes connected to the internet and to people. Compared with other systems, it allows full mobility without being tied to fixed devices or web environments, as well as individualizing the exact information to be transferred. In addition, the system is ready for the dissemination of intelligent information, such as value-added services. As a result, the system, in addition to the city services, may also send context-sensitive information of interest to the user.

Siesta TV's key value lies not only in its applicability to a new concept of future city, but in the multiplicity of derivative applications that can be generated from the developments obtained, which may be hosted in a multitude of economic areas generating new sources of value and which in this project are focused on the fields of Health, Culture, Sustainability and Public Services.
Siesta TV is focused on people, so all kinds of people with diversity are taken into account: people with special needs, people at risk of exclusion, etc., with special interest in the integration of services for the elderly in rural and urban areas. This system will enable the integration of social and health services and resources, achieving real coordination. On the other hand, the system can serve to provide the service of prevention of dependence and promotion of personal autonomy, whose development is in process and which has sometimes been linked to telecare, although they are different services.

This solution will allow the municipalities, centers or users that adopt it to integrate into the global movement of Age-Friendly Cities and incorporates the vision of equity in the rural and urban areas of national and international municipalities. It is the first cloud platform that allows all the elements to be fully "integrated", "connected" and "interoperable".

Siesta TV has been designed around eight lines of specific research, structured and formally coordinated: 1) Reference models in the care of patients and dependence based on international and national experience. 2) State of the art of information technologies applied to sick, elderly or disabled people. 3) Process and product development methodology. 4) Improvement and automation of the systems of continued geriatric socio-sanitary assistance. 6) Improvement and automation of home help systems. 7) Learning in dependency management and assistance to patients for informal caregivers, formal caregivers, technicians and end users. 8) Application of the previous sub-projects to a set of seven pilot floors and a nursing home, integrating the e-services and aid technologies.

Although many of these services are currently offered independently by other solutions, their fragmentation does not make their social "domestication" possible (or it occurs very slowly). Siesta TV offers the whole services architecture through the most used visualization and interaction devices today: digital television, tablets and smartphones. To the extent that this service architecture is offered through a device perfectly assimilated and "domesticated" by society, Siesta TV intends to take advantage of the a priori acceptance of this device to introduce the services offered as part of daily activities (as is already the case with the use of television). All the stakeholders around the chronic patient and the elderly have different service needs. For the patient, the most important of these are the caregivers. Siesta TV offers a range of services that cover the service needs of patients and caregivers (Figure 9), allowing the integration of the patients in the community, as the ICCC, ECCM and CCM models propose.

The users of Siesta Care are categorized using the Kaiser pyramid. Those in the lower part of the pyramid need technologies to access socio-sanitary educational video content, FAQs, videoconferencing and social networks. Those in the upper part of the pyramid need additional technologies such as remote measuring, control of adherence to treatment, domotics, an emergency button, etc.
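The Kaiser-pyramid categorization just described can be pictured as a simple mapping from risk stratum to service bundle. The strata names and bundles below are illustrative assumptions, not the platform's actual configuration.

```python
# Hypothetical sketch of Kaiser-pyramid service assignment as described above.
def siesta_services(stratum: str) -> list[str]:
    """Map a Kaiser-pyramid stratum to an illustrative SiestaCare service bundle."""
    bundles = {
        # Base of the pyramid: supported self-care.
        "self_care": ["educational_video", "faq", "videoconference", "social_network"],
        # Middle: disease/care management adds monitoring.
        "care_management": ["educational_video", "videoconference",
                            "remote_measuring", "treatment_adherence"],
        # Top: intensive case management adds home automation and emergencies.
        "case_management": ["remote_measuring", "treatment_adherence",
                            "domotics", "emergency_button"],
    }
    return bundles[stratum]

print(siesta_services("self_care"))
print(siesta_services("case_management"))
```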
As we have discussed, technology is available to digitize many of the processes that are costly for health systems and that chronic and dependent patients require. The biggest challenge is to map the technology to these processes. Using MIGRAS, it is possible to determine whether a service is suitable in terms of quality, cost and effectiveness in a particular health system, for a particular patient with particular chronic disease(s). This analysis is outside the scope of this paper, but in any case, prior to an exhaustive calculation, most of the technologies of Siesta TV will cover most of the required services. Siesta TV is based on the design of interactive person-TV interfaces and supported by advanced Ambient Intelligence technologies, providing: 1) Access to general information of interest. 2) Access to accessible and adaptive training resources, ensuring individualized remote monitoring of the learning and understanding process. 3) Control of the home automation system installed in the home or hospital room. 4) Participation in a collaborative space with professionals and other users with similar interests. 5) A task planning system to establish standard telephone communications, IP telephony or videoconference with family, friends or an external "contact center". 6) A system of remote supervised consultations and programs for the adoption of healthy life habits. 7) An automatic help system for decision making.

Conclusions

In this scenario, managing the chronic patient will be achieved by surrounding him/her with autonomous sensors (scales, tensiometers, ECG recorders and other devices that can be implanted in the skin or tissues) and environmental sensors of temperature, humidity, position, etc., forming ad hoc networks, be they BAN (Body Area Network), PAN (Personal Area Network) and/or HAN (Home Area Network), which will acquire and transmit all the information of interest. The actors involved in health care can then act in synergy, improving the quality of care and the efficiency of the health system, which are the main challenges facing current medicine and the immediate future. A ubiquitous omnichannel experience will also be fundamental to allow all the different stakeholders to interact with the systems and the people.

The key is to facilitate personal autonomy and increase the quality of life of people with functional diversity, extending the new values of e-Inclusion based on neuromorphic computing and cognitive computing platforms such as IBM's Watson. In recent years, researchers on issues related to information and communication technologies have defined a term called "Ambient Intelligence" or Environmental Intelligence (EnI). The concept of Environmental Intelligence poses a new way of understanding how people interact with technology. It describes an environment that perceives, adapts and responds to the presence of people. It places the person at the center of the new city and the new services. It enables best practices for active aging by using Environmental Intelligence and Cognitive Intelligence: the person is placed at the center of development in the daily environment, facilitating daily life and improving their present and future quality of life.

Figure 2. Evolution of the U-City. Figure 5. WHO innovative care framework for chronic conditions, 2002. Source: WHO, Innovative Care for Chronic Conditions (own adaptation).
10) Cost Assignment Model for Social Care Processes (MACPAS). 11) Tool for the Evaluation of Results in Social Care (HERAS). All this frame of reference of models, tools and methodologies is what we call MIGRAS: Integral Model of Management and Results in Social Attention. The MIGRAS model follows the pattern initiated in health care, which has led to products such as DRGs (Diagnosis Related Groups), ACGs (Adjusted Clinical Groups) and similar ones; it looks for common aspects in the users of the social services that allow the establishment of classifications for grouping the care processes, providing them with indicators and making it possible to compare their performance and the effectiveness of their results. In addition, one of the expected benefits is to have information and tools to forecast future needs. This set of systems and tools will allow us to: 12) Register users of social services uniquely and have a comprehensive view of the services they use and their family or coexistence environment. 13) Facilitate the monitoring of the social process with an ABQ methodology (Activity-Based Quality) focused on the quality of care, knowing on the one hand how the processes of social care are carried out (Activity-Based Management, ABM) and, on the other hand, the cost of each process (Activity-Based Costing, ABC).

9) Construct a theoretical model to evaluate the efficiency in the distribution of resources, called the General Efficiency Index (EG). This model integrates structural variables (budget of personnel expenses and general expenses), process variables (indices of efficiency in the resources used in the calculation of the GRASS/CAAS) and result variables (results of the evaluation of a battery of indicators). 10) We start from the premise that it is possible to delimit certain homogeneous groups of users of social services based on a series of parameters, such as the characteristics of the need or social problem involved, the severity of the situation and the type and intensity of the social services provided, and, according to these, calculate the expense they generate. 11) For this, we must first classify all the users, following the reference of the Social Services Catalog: this gives rise to the GRASS. A later grouping, weighting by iso-consumption, leads to sets of similar behavior that we call CAAS (Aggregated Sets of Social Care).

Siesta TV is an ecosystem, a consequence of the development of digital communication. In Web 2.0 and its more recent versions, Internet users are granted the status of prosumers, a contraction of the words producers and consumers. Prosumers can articulate new communication environments that are the result of the evolution of mobile digital communication. This revolution has transported us to a new socio-sanitary-cultural ecosystem: the ubiquitous society, to which only those able to adapt quickly enough will be invited. Siesta TV, being an interactive ecosystem that is accessible, usable, intelligent, ubiquitous and adaptive, will allow any type of person, regardless of their diversity and economic situation, to adapt to this new paradigm of social integration.
Figure 9. New paradigm of digitalization of services on Siesta TV.

8) System of continuous monitoring of vital parameters that facilitates rehabilitation programs and the early detection or rehabilitation of cognitive problems. 9) Chronic disease monitoring: situation monitoring and control system. 10) Adaptive checkup systems. 11) Dependency prevention and treatment system. And horizontal features such as: 12) Ubiquity: allows the user to use the system anywhere, anyhow.

Table 1. Key elements of the ICC model: emphasis on quality of care and systemic quality; flexibility/adaptability; integration as the hard and fractal core of the model.
9,307.2
2018-03-08T00:00:00.000
[ "Medicine", "Computer Science", "Engineering" ]
Projecting RNA measurements onto single cell atlases to extract cell type-specific expression profiles using scProjection

Multi-modal single cell RNA assays capture RNA content as well as other data modalities, such as spatial cell position or the electrophysiological properties of cells. Compared to dedicated scRNA-seq assays, however, they may unintentionally capture RNA from multiple adjacent cells, exhibit lower RNA sequencing depth compared to scRNA-seq, or lack genome-wide RNA measurements. We present scProjection, a method for mapping individual multi-modal RNA measurements to deeply sequenced scRNA-seq atlases to extract cell type-specific, single cell gene expression profiles. We demonstrate several use cases of scProjection, including identifying spatial motifs from spatial transcriptome assays, distinguishing RNA contributions from neighboring cells in both spatial and multi-modal single cell assays, and imputing expression measurements of un-measured genes from gene markers. scProjection therefore combines the advantages of both multi-modal and scRNA-seq assays to yield precise multi-modal measurements of single cells.

Statistics

For all statistical analyses, confirm that the following items are present in the figure legend, table legend, main text, or Methods section (n/a or Confirmed):
- The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement.
- A statement on whether measurements were taken from distinct samples or whether the same sample was measured repeatedly.
- The statistical test(s) used AND whether they are one- or two-sided. Only common tests should be described solely by name; describe more complex techniques in the Methods section.
- A description of all covariates tested.
- A description of any assumptions or corrections, such as tests of normality and adjustment for multiple comparisons.
- A full description of the statistical parameters including central tendency (e.g. means) or other basic estimates (e.g. regression coefficient) AND variation (e.g. standard deviation) or associated estimates of uncertainty (e.g. confidence intervals).
- For null hypothesis testing, the test statistic (e.g. F, t, r) with confidence intervals, effect sizes, degrees of freedom and P value noted. Give P values as exact values whenever suitable.
- For Bayesian analysis, information on the choice of priors and Markov chain Monte Carlo settings.
- For hierarchical and complex designs, identification of the appropriate level for tests and full reporting of outcomes.
- Estimates of effect sizes (e.g. Cohen's d, Pearson's r), indicating how they were calculated.

Our web collection on statistics for biologists contains articles on many of the points above.

Software and code

Policy information about availability of computer code: for manuscripts utilizing custom algorithms or software that are central to the research but not yet described in published literature, software must be made available to editors and reviewers. We strongly encourage code deposition in a community repository (e.g. GitHub). See the Nature Portfolio guidelines for submitting code & software for further information.

Data

Policy information about availability of data: all manuscripts must include a data availability statement.
This statement should provide the following information, where applicable:
- Accession codes, unique identifiers, or web links for publicly available datasets
- A description of any restrictions on data availability
- For clinical datasets or third party data, please ensure that the statement adheres to our policy

Data collection: No software was used for data collection.

Data analysis: The scProjection framework was implemented in the 'scProjection' Python package, which can be installed through PyPI (https://pypi.org/project/scProjection/), and the code is available at https://github.com/quon-titative-biology/scProjection. The data preprocessing and analysis of results were done using R 3.6.1 and R 4. CIBERSORTx was run from their website (https://cibersortx.stanford.edu/), which did not provide versioning information at runtime.

Research involving human participants, their data, or biological material: policy information about studies with human participants or human data (reporting on sex and gender; reporting on race, ethnicity, or other socially relevant groupings; population characteristics; recruitment; ethics oversight). Note that full information on the approval of the study protocol must also be provided in the manuscript.

Field-specific reporting: Please select the one that best fits your research (read the appropriate sections if unsure): Life sciences; Behavioural & social sciences; Ecological, evolutionary & environmental sciences. For a reference copy of the document with all sections, see nature.com/documents/nr-reporting-summary-flat.pdf

Life sciences study design: All studies must disclose on the following points (sample size, data exclusions, replication, randomization, blinding) even when the disclosure is negative.

Reporting for specific materials, systems and methods: We require information from authors about some types of materials, experimental systems and methods used in many studies. Here, indicate whether each material, system or method listed is relevant to your study. If you are not sure whether a list item applies to your research, read the appropriate section before selecting a response.

Data availability: All data analyzed in this article are publicly available through online sources. The gene count matrix for the RNA mixture experiments in CellBench is provided in the R data file that is available at https://github.com/Shians/CellBench. The gene count matrix of the bulk-RNA experiments and IHC measurements for the ROSMAP-IHC benchmark can be found at https://github.com/ellispatrick/CortexCellDeconv. Mouse Primary Motor Area (MOp) and mouse primary visual cortex (VISp) scRNA-seq datasets are from the Cell Types Database of the Allen Brain Map (https://portal.brain-map.org/atlases-and-data/rnaseq/mouse-aca-and-mop-smart-seq and https://portal.brain-map.org/atlases-and-data/rnaseq/mouse-v1-and-alm-smart-seq, respectively). We obtained the gene count matrix for the mouse brain atlas described in Yao et al. and Tasic et al.
from the Allen Institute Cell Types database: RNA-Seq data page on the Allen Institute's webpage (https://portal.brain-map.org/atlases-and-data/rnaseq).

Sample size: We applied and benchmarked scProjection using seven publicly available, bulk (or bulk-like) expression datasets from diverse assays, including bulk RNA sequencing, RNA imaging-based MERFISH, LCM-seq, and Patch-seq. Our results are therefore based on reasonable sample sizes.

Data exclusions: No additional sample filtering steps were applied to the datasets obtained from the public domain.

Replication: As the entire paper is based on computational analysis, reproducibility of results was ensured by re-running our analysis scripts and verifying that the same results were reproduced for each figure.

Randomization: Randomization was not performed in this study. In each experiment, scProjection is trained on an entire dataset of bulk (or bulk-like) RNA samples, and is only given a single cell reference dataset to help train the VAEs. Because the bulk RNA samples are not labeled in any way when input into scProjection, there was no need to define a training/testing split of the dataset.

Blinding: Blinding is not relevant to this study, as there was no explicit randomization.
1,576.4
2023-08-25T00:00:00.000
[ "Computer Science", "Biology" ]
The Kinetics Investigation of CO2 Absorption into TEA and DEEA Amine Solutions Containing Carbonic Anhydrase

Tertiary amines have been used as alternative absorbents to traditional primary and secondary amines in the process of carbon capture. However, the carbon dioxide (CO2) absorption rates in these kinds of amine are relatively slow, which implies greater investment and construction costs and limits the large-scale application of carbon capture. Carbonic anhydrase (CA) is considered to be an ideal homogeneous catalyst for accelerating the rate of CO2 absorption into aqueous amine solution. In this work, CO2 absorption combining CA with two single aqueous tertiary amines, namely triethanolamine (TEA) and 2-(diethylamino)ethanol (DEEA), was studied using a stopped-flow apparatus over temperatures ranging from 293 to 313 K. The concentrations of the selected aqueous amine solutions and of CA used in the experiments ranged from 0.1 to 0.5 kmol/m3 and from 0 to 50 g/m3, respectively. Compared to the solution without the addition of CA, the pseudo first-order reaction rate in the presence of CA (k0,with CA) is significantly increased. The values of k0,with CA have been calculated by a new kinetics model. The experimental and calculated k0,amine and k0,with CA values in CO2-amine-H2O solutions were also compared.

Introduction

Carbon dioxide (CO2) absorption using chemical absorption with traditional aqueous amine solutions, such as monoethanolamine (MEA) and diethanolamine (DEA), is a terminal processing technology that can be deployed on a large scale in coal-fired power plants [1]. However, there are a number of existing drawbacks, such as the very high energy consumption of the CO2 desorption process [2]. These energy costs account for approximately 80% of the operating costs [3]. In addition, in the presence of oxygen the amine solution easily degrades and produces harmful volatile organic compounds, resulting in solvent loss and even corrosion problems [4]. With the deepening of research, tertiary amines such as methyldiethanolamine (MDEA), triethanolamine (TEA) and 2-(diethylamino)ethanol (DEEA) were proposed as alternative solvents to MEA due to their lower energy consumption for carbon capture [5]. Although the desorption process consumes less energy in these tertiary amine solvents, the relatively slow absorption rate of CO2 means that a larger absorber is needed to meet the demand for carbon capture. Carbonic anhydrase (CA) greatly accelerates the CO2 hydration reaction and is considered a promising homogeneous catalyst for CO2 absorption in the carbon capture process [6][7][8]. In this work, the kinetics of CO2 reacting with TEA and DEEA solutions was studied using a stopped-flow apparatus. Experimental kinetics data for CO2 with aqueous amine solutions catalyzed by CA over temperatures ranging from 293 to 313 K were collected. The CA concentration was varied between 0 and 50 g/m3 in aqueous amine solutions with concentrations ranging from 0.1 to 0.5 kmol/m3. The base-catalyzed hydration mechanism is selected to interpret the reaction mechanism of CO2 with aqueous amine solutions. The collected experimental data are fitted by the empirical power law equation to determine the possible reaction order in each reaction system.
The kinetic constants under CA catalysis were also investigated, and the results showed that the pseudo first-order reaction rate (k0,amine) catalyzed by CA is significantly increased compared to the absence of CA. A new kinetics model of CA-catalyzed absorption of CO2 in aqueous tertiary amine solutions was established and compared with the experimental data.

Chemicals

TEA and DEEA are tertiary amines ordered from Aladdin Industrial Corporation (Shanghai, China). Their molecular structures are shown in Figure 1. Bovine carbonic anhydrase (BCA) was obtained from Sigma-Aldrich Trading Co., Ltd. (Shanghai, China). The basic information for the chemical reagents is given in Table 1. The desired solution concentrations were obtained by diluting the chemical reagents with DI water.

Experimental Procedure

In each experimental case, the sample of the desired CO2 absorbent was prepared by dissolving a pre-weighed amine solvent in deionized water with a certain amount of CA. Pure CO2 gas was bubbled into DI water for a certain period of time to obtain a saturated aqueous solution of CO2. The k0,amine of CO2 absorption into the aqueous amine solution with or without CA was obtained with the stopped-flow apparatus. A schematic diagram of the stopped-flow apparatus is shown in Figure 2, and a detailed operational description can be found in our previous work [6,9]. In a typical experimental operation, equal volumes of fresh CO2 and amine solutions are pushed into the conductance cell by a gas-driven source. A change in the conductivity signal is observed in the cell as the reaction of CO2 with the aqueous amine forms new ionic products. The detected signal then stabilizes as the absorption reaction completes and reaches equilibrium. The kinetics constant can be obtained from the fitted curve of the conductivity changing with time at different temperatures, as described in our previous work [10]. Each measurement was repeated at least 7 times to ensure the accuracy of the collected data. The effectiveness of the test set-up was demonstrated by good agreement with the results of Ali et al. [11,12] in our previous work [13].

Reaction Kinetics of CO2 Absorption

The catalyzed hydration mechanism proposed by Donaldson and Nguyen [14] is commonly used to explain the absorption mechanism of CO2 in tertiary alkanolamine solutions (R3N). In the process of gas-liquid two-phase contact, the tertiary alkanolamine behaves like a catalyst for the hydration of CO2; the solvent itself does not react directly with CO2. Expressions of the reaction mechanism in the CO2-amine-H2O system are given in Equations (1)-(3). The overall rate of CO2 absorption in aqueous amine solution (kmol/(m3·s)) can then be expressed as Equation (4). In most published studies, the contribution of CO2 reacting with OH− and H2O, shown in reactions (2)-(3), to the overall absorption process can be neglected [15,16]. The reaction kinetic constant (k0,amine) throughout the CO2 absorption process is then completely determined by reaction (1). Subsequently, the pseudo first-order equation shown in Equation (5) can be used to describe the reaction between CO2 and the tertiary amine solution. CA was first found in red blood cells [17] and has been proved to be a very efficient catalyst for the conversion of CO2 to HCO3−. CA is a broad group of zinc metallo-proteins (enzymes) existing in three genetically unrelated families of isoforms (α, β, γ).
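Before turning to the catalytic mechanism, here is a rough illustration of how a pseudo first-order constant can be extracted from a stopped-flow conductivity trace like the ones described above. The single-exponential form and all numbers below are assumptions for illustration; the actual fitting procedure is the one described in [10].

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: extracting the pseudo first-order constant k0 from a
# stopped-flow conductivity trace. The single-exponential form and the
# numbers below are illustrative assumptions, not measured data.
def conductivity(t, g0, g_inf, k0):
    """Single-exponential approach of the cell conductance to equilibrium."""
    return g_inf + (g0 - g_inf) * np.exp(-k0 * t)

# Synthetic trace standing in for one experimental run.
t = np.linspace(0.0, 0.5, 200)                      # s
true = conductivity(t, 1.00, 1.80, 25.0)            # arbitrary conductance units
noisy = true + np.random.default_rng(1).normal(0.0, 0.005, t.size)

popt, pcov = curve_fit(conductivity, t, noisy, p0=(1.0, 2.0, 10.0))
print(f"fitted k0 = {popt[2]:.2f} 1/s  (each of the >= 7 repeats would be fitted and averaged)")
```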
The mechanism of CO2 hydration catalyzed by CA was introduced in detail by Lindskog et al. [18], as shown in Equations (6)-(8).

Results and Discussion

All experimental data are given in Appendix A.

CO2-Amine-H2O System

The reaction kinetic data for CO2 with aqueous tertiary amine (TEA and DEEA) solutions at temperatures ranging from 293 K to 313 K are listed in Table A1. In the CO2-tertiary amine-H2O system, the amine itself is the major contributor to the reaction, and the contributions of OH− and water are negligible [15,16]. The base-catalyzed hydration mechanism (Equations (1)-(3)) was therefore used to interpret the possible reaction mechanism of CO2 reacting with aqueous TEA and DEEA solutions. The collected kinetic data shown in Figure 3 (Table A1) were fitted by Equation (5) to verify the possible reaction order of CO2 absorption in amine solution at the temperatures of 293, 298, 303, 308 and 313 K; the reaction order with respect to amine concentration is approximately equal to one (1.30-1.36 for TEA and 0.83-1.0 for DEEA) over the temperature range 293-313 K. As can be observed from Figure 3, the pseudo first-order rate constants grow with increasing concentration and temperature of the amine solution. At the experimental temperatures, the reaction order with respect to amine concentration was found to be approximately 1 by fitting the k0,amine values with the power law Equation (9), of the form k0,amine ∝ C_amine^n, where n is the reaction order. The obtained k0,amine values listed in Table A1 were fitted with Equation (10) in order to get the k2,amine values of these amines. All the fitted results of k2,amine for TEA and DEEA are given in Table A2. It can easily be seen that the k2,amine value is a function of temperature only. It is widely accepted in published studies that k2,amine can be fitted by the Arrhenius expression k2,amine = A·exp(−Ea/(RT)), where A is the Arrhenius (pre-exponential) constant (m3/(kmol·s)), Ea is the activation energy (kJ/mol) and R is the universal gas constant (0.008315 kJ/(mol·K)). The corresponding Arrhenius equation for k2,amine of TEA was correlated according to this expression.

CO2-Amine-H2O Containing CA

The collection of kinetics data was performed almost exactly as in Section 3.1, except that a quantitative amount of enzyme was added to the amine solution. The CA concentration was varied over the range 0-50 g/m3 in 0.2 kmol/m3 aqueous TEA solutions. The collected kinetic data are listed in Table A3 and fitted by Equation (14), as shown in Figure 6. Compared with the results of Figure 3, it is found that the k0 value is clearly increased by the presence of CA in the aqueous amine solution. There is a non-linear relationship between k0,with CA and CA concentration, which is inconsistent with the test results provided by Alper and Deckwer [19]. At low CA concentration the rate constants were well fitted by a linear regression, while at higher enzyme concentration they reached a maximum value and did not change further. A similar phenomenon was observed in the work of Van Elk et al. [20] on the reaction of CO2 with DMMEA catalyzed by a thermostable variant of human carbonic anhydrase (5X CA) provided by CO2 Solutions Inc. However, the increase of the overall reaction rate constants in the DEEA solution was not significant. This is because the enzyme shows better activity in a near-neutral environment, and the pKa value of DEEA has a key impact on the catalytic activity of the enzyme.
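The two fitting steps described above for the uncatalyzed system, the power-law fit of Equation (9) for the reaction order and the Arrhenius fit of k2,amine, can be sketched as follows. All numerical values are placeholders, not the data of Tables A1-A2.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the two fitting steps described above; all numbers are
# placeholders, not the values in Tables A1-A2.
# Step 1: reaction order n from a power law in the amine concentration.
C = np.array([0.1, 0.2, 0.3, 0.4, 0.5])            # kmol/m^3
k0 = np.array([5.1, 10.3, 15.2, 20.8, 25.9])       # 1/s, illustrative

power_law = lambda c, k, n: k * c**n
(k_fit, n_fit), _ = curve_fit(power_law, C, k0, p0=(50.0, 1.0))
print(f"reaction order n = {n_fit:.2f}, prefactor k = {k_fit:.1f}")

# Step 2: Arrhenius fit of k2,amine(T): k2 = A * exp(-Ea / (R T)).
R = 0.008315                                        # kJ/(mol K)
T = np.array([293.0, 298.0, 303.0, 308.0, 313.0])   # K
k2 = np.array([48.0, 62.0, 80.0, 101.0, 127.0])     # m^3/(kmol s), illustrative

# Linearised fit: ln k2 = ln A - (Ea/R) * (1/T).
slope, intercept = np.polyfit(1.0 / T, np.log(k2), 1)
print(f"Ea = {-slope * R:.1f} kJ/mol, A = {np.exp(intercept):.3g} m^3/(kmol s)")
```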
Similar conclusions on activator effects were reached by Wilk et al. [21] using N-methyldiethanolamine (MDEA) with piperazine as an activator. Subsequently, an overall catalyst enhancement (CE) was defined for enzyme-catalyzed CO2 absorption systems as a ratio of reaction rate constants: Model 1, CE = (k0,with CA − k0,without CA)/k0,without CA, and another correlation, marked as Model 2, CE = k0,with CA/k0,without CA, which was used to correct the experimentally determined k0,with CA values in the CO2-MDEA system with human carbonic anhydrase (HCA) as the additive agent, introduced by Penders-van Elk et al. [8]. However, in order to further study the effect of CA on k0,amine, the influence of the amine itself needs to be removed. The model termed Model 3, given in Equation (14), was selected in this work; it accounts for the rate constant of CO2 reacting with hydroxyl ions in the presence of the enzyme so as to exclude the effect of the amine on CA activity, which distinguishes it from Models 1 and 2. It is worth noting that neither Model 1 nor Model 2 was used in this work; instead, Equation (14) was used for data fitting, as in our previous work on MDEA and DMEA containing carbonic anhydrase [6]. Table A4 lists the kinetic data of CO2 absorption in the various amine solutions at temperatures from 293 to 313 K, and the results shown in Figure 7 exhibit a linear trend in both aqueous TEA and DEEA solutions. In the CO2-DEEA-H2O system, CA shows partial activity at lower concentrations and no obvious regular catalytic activity, which means the model of Equation (14) is not suitable for this system. This is likely because the overall absorption rate is not limited by enzymatic turnover once the enzyme is sufficiently concentrated. In Equation (14), k0,with CA is the reaction rate constant of the CA-CO2-amine-H2O system, k0,amine is the reaction rate constant of the CO2-amine-H2O system, and kOH− represents the rate constant of CO2 reacting with OH− in the solution, which can be found in the work of Pinsent et al. [22]. k3 and k4 are parameters of Equation (14) that are functions of temperature following an Arrhenius relationship. The catalyst enhancement value reflects the extent to which CA promotes CO2 absorption into the amine solution compared to the uncatalyzed reaction. Equations (15) and (16) result from fitting the intercept values in Figure 8 to an Arrhenius relationship. In this work, the corresponding reaction rate constant is defined as kOH−CA,amine. The reaction kinetics fitting of CO2 reacting with aqueous solutions of different amines, including TEA, was performed. Equation (20) can be obtained by combining Equations (4) and (14); k0,without CA can be replaced by k2,amine multiplied by the amine concentration. The calculated k0 values from Equations (19) and (20) are plotted against the experimental values in Figure 9, showing good performance in predicting the k0,with CA values of the CO2-TEA-H2O solution, with an AAD of 23.38%.

Conclusions

The stopped-flow apparatus was used in this work to acquire the kinetic data for CO2 with aqueous TEA or DEEA solutions. With these kinetics data, the possible reaction mechanism of CO2 absorption in amine solutions can be inferred. By comparing the experimental k0,amine values with the calculated results, it was found that the AADs were 10.80 and 9.95% in the CO2-TEA-H2O and CO2-DEEA-H2O systems, respectively.
These results prove that the base-catalyzed mechanism is suitable for interpreting the reaction pathway of CO2 absorption into aqueous TEA and DEEA solutions. Adding a certain amount of CA to the aqueous solutions of TEA and DEEA can significantly improve the absorption rate of CO2. With increasing amine concentration, temperature and CA amount, the reaction rate constant is markedly improved. A catalyst enhancement (CE_CA,amine) is then introduced to express the catalytic activity of CA in CO2 absorption with TEA and DEEA solutions. The results fitted by the new kinetics model showed reasonably good performance in predicting the pseudo first-order reaction rate constant (k0,with CA) in CO2-TEA-H2O, with an AAD of 23.38%, while in the CO2-DEEA-H2O system the CE model was not applicable due to the high pKa value of DEEA.
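As a closing illustration, the two catalyst-enhancement definitions quoted in the discussion (Models 1 and 2) reduce to one-line formulas. The rate constants below are hypothetical, not the measured values of Appendix A.

```python
# The two catalyst-enhancement definitions quoted in the discussion, applied
# to illustrative rate constants (not the measured values in Appendix A).
def ce_model_1(k0_with_ca: float, k0_without_ca: float) -> float:
    """Model 1: CE = (k0,with CA - k0,without CA) / k0,without CA."""
    return (k0_with_ca - k0_without_ca) / k0_without_ca

def ce_model_2(k0_with_ca: float, k0_without_ca: float) -> float:
    """Model 2: CE = k0,with CA / k0,without CA (Penders-van Elk et al.)."""
    return k0_with_ca / k0_without_ca

k_without, k_with = 10.0, 45.0   # 1/s, hypothetical TEA run at one temperature
print(f"Model 1 CE = {ce_model_1(k_with, k_without):.2f}")
print(f"Model 2 CE = {ce_model_2(k_with, k_without):.2f}")
```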
3,502.8
2021-11-26T00:00:00.000
[ "Chemistry", "Environmental Science" ]
The shocks in Josephson transmission line revisited

We continue our previous studies of localized travelling waves, more specifically of the shocks and the kinks, propagating in the series-connected Josephson transmission line (JTL). The paper consists of two parts. In the first part we calculate the scattering of the "sound" (small amplitude, small wave vector harmonic wave) on the shock wave. In the second part we study the similarities and the dissimilarities between the shocks and the kinks in the lossy JTL. We also find the particular cases when the nonlinear equation describing a weak travelling wave in the lossy JTL can be integrated in terms of elementary functions.

I. INTRODUCTION

The interest in studies of nonlinear electrical transmission lines, in particular of lossy nonlinear transmission lines, started some time ago [1][2][3], but it became even more pronounced recently [4][5][6][7]. A very recent and complete review of studies of nonlinear electric transmission networks can be found in Ref. 8. We previously studied the shock waves in the lossy Josephson transmission line (JTL) 9,10 and the kinks (and solitons) in the lossless (actually, without any shunting at all) JTL 10. The present work has several aims. First, we would like to analyse the interaction between the "sound" (small amplitude, small wave vector harmonic wave) and the shock wave. Second, we would like to establish the relation between the shock waves and the kinks. And third, we would like to additionally study the weak shock waves and, in particular, to look for the cases when the nonlinear equation describing weak travelling waves in the JTL can be integrated analytically.

The rest of the article is constructed as follows. In Section II we rederive the circuit equations describing the JTL in the continuum approximation. In Section III we consider scattering of the "sound" wave by the shock wave and calculate the appropriate reflection and transmission coefficients. In Section IV we show that the kinks, which we previously believed to exist only in the lossless JTL, exist also in the lossy JTL, and show the connection between the shocks and the kinks. We also analytically integrate the wave equation describing weak shocks in the lossy JTL for specific values of the losses parameter. We conclude in Section V. In Appendix A we present a physically appealing model of the JTL composed of superconducting grains. In Appendix B we explain the condition for the applicability of the continuum approximation used in the paper.

The JTL, constructed of Josephson junctions (JJs), capacitors and resistors, is shown in Fig. 1. (A possible physical realization of the model is presented in Appendix A.) We take as the dynamical variables the phase differences (which for brevity we will call just phases) φn across the JJs and the voltages vn of the ground capacitors. The circuit equations are Eqs. (1a) and (1b), where C is the capacitance, Ic is the critical current of the JJ, and CJ and RJ are the capacitor and the ohmic resistor shunting the JJ. In the continuum approximation we treat n as the continuous variable Z and approximate the finite differences in the r.h.s.
of the equations by the first derivatives with respect to Z, after which the equations take the form of Eqs. (2a) and (2b), where we introduced the dimensionless time τ = t/√(LJC) and the dimensionless voltage V = v/(ZJIc); LJ ≡ ℏ/(2eIc) is the "inductance" of the JJ and ZJ ≡ √(LJ/C) is the "characteristic impedance" of the JTL. The condition for the applicability of the continuum approximation is formulated explicitly in Appendix B.

III. THE SOUND SCATTERING BY THE SHOCK WAVE

A. The sound waves and the shock waves

Equation (2b) being nonlinear, the system (2a), (2b) has many different types of solutions. In this Section we are interested in only two of them. The first type: small amplitude, small wave vector harmonic waves on a homogeneous background φ0. For such waves Eq. (2b) is simplified to Eq. (3); we ignored the shunting terms in the r.h.s. of (2a) because they contain higher-order derivatives in comparison with the main term, and a small wave vector also means a small frequency. The harmonic wave solutions of Eqs. (2a), (2b) (which, for brevity, we call the sound) are given by Eq. (4), where u is the normalized sound velocity of Eq. (5). In this paper the normalized velocity ≡ the physical velocity times √(LJC)/Λ, where Λ is the JTL period. Note that the stability of a homogeneous background φ0 demands cos φ0 > 0.

The second type of solutions we are (mostly) interested in is shock waves 9,10. In this Section we ignore the structure of the shock wave and consider it as a discontinuity of the dynamical variables. The property of the shocks, which will be proven in the next Section, connects the discontinuities of φ and V with the shock velocity through Eqs. (7a) and (7b), where φ1 and V1 are the phase and the voltage before the shock, φ2 and V2 those after the shock, and U is the normalized shock wave velocity. Note also Eq. (8), an obvious consequence of (7a) and (7b).

B. The reflection and the transmission coefficients

In this Section we are interested in two problems 11. The first problem: a sound wave is incident from the rear on a shock wave; determine the sound reflection coefficient. The second problem: a sound wave is incident from the front on a shock wave; determine the sound transmission coefficient. The situation for the first problem is shown in Fig. 2.
In Fig. 2 we took into account Eq. (9), which will be derived in Section IV; φb and φa are the phases before and after the shock in the absence of the sound, respectively. Also, for the first problem mentioned above we have Eqs. (10)-(11d), where (in) stands for the incident sound wave and (r) for the reflected sound wave. Substituting (10)-(11d) into (7a), (7b), in the first-order approximation we obtain Eqs. (12a), (12b). Taking into account the relations (13) (the difference in the signs is because of the opposite directions of propagation of the two waves) and excluding δU, we obtain the reflection coefficient (14), where u_in = u(φa) − U is the velocity of the incident sound wave relative to the shock wave, and u_r = u(φa) + U is the velocity of the reflected sound wave relative to the shock wave. As one could have expected, the modulus of the sound reflection coefficient is less than one, and it goes to zero when the intensity of the shock wave decreases, that is when φa → φb; in other words, when the shock wave itself nearly becomes a sound wave.

Now let us turn to the second problem. We have Eqs. (15a)-(15d), where (t) stands for the transmitted wave. Substituting (10), (15a)-(15d) into (7a), (7b), in the first-order approximation we obtain Eqs. (16a), (16b). Taking into account the relations (17) and excluding δU, we obtain the transmission coefficient (18), where u_in = u(φb) + U is the velocity of the incident sound wave relative to the shock wave, and u_t is the velocity of the transmitted sound wave relative to the shock wave. As one could have expected, the sound transmission coefficient is less than one, and it goes to one when the intensity of the shock wave decreases, that is when φa → φb. Looking back at the derivation of (14) and (18), we understand that the equations are valid also for a generalized Josephson law for the supercurrent, Is = Ic f(φ), where f is a (nearly) arbitrary function. The difference from the case considered above is that the sound velocity in the general case is given by Eq. (20), and the shock velocity is given by Eq. (21). The validity of (21) will become obvious after we present the proof of its particular case (8) in the next Section.

A. The travelling waves

In this Section we would like to study the structure of the shock wave, so we return to Eqs. (2a), (2b) in their full glory. For travelling waves we make the ansatz (22), where U is the travelling wave velocity, and obtain Eqs. (23a), (23b). Consider a solution which for τ ∈ (−∞, +∞) stays in a finite region of the (φ, V) phase space. Limit cycles are excluded for our problem, and strange attractors are excluded in a 2d phase space in general 13. Hence the trajectory begins at a fixed point and ends at a fixed point; this gives the boundary conditions (24a). Integrating (23a), (23b) with respect to τ from −∞ to +∞ and taking into account the boundary conditions, we obtain Eqs. (7a), (7b), which are the basis of our consideration in the previous Section. Note that the shunting of the JJ doesn't influence the shock velocity 9,10.

Excluding V from (23a), (23b) and integrating the resulting equation, we obtain a closed equation for φ, Eq. (25), where τ̃ ≡ τ√(C/CJ) = t/√(LJCJ), γ ≡ √(LJ/CJ)/RJ, and F is the constant of integration. Taking into account the boundary conditions (24a), we can write down (25) as Eq. (26) with the potential Π(φ) given by (30), which resembles the equation describing a current-biased JJ within the RCSJ model 12.
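The mechanical analogy just drawn (a particle with friction γ rolling in the potential Π(φ)) lends itself to a direct numerical check. The sketch below integrates the travelling-wave equation in this analogue form; the sign conventions, the time normalisation and the explicit expression for dΠ/dφ are reconstructed from the stationarity conditions discussed in what follows, so this is an illustration rather than a verbatim transcription of Eqs. (25)-(29).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal numerical sketch of the travelling-wave ("rolling particle")
# picture. Signs and time normalisation are assumed; dPi/dphi is
# reconstructed so that Pi is stationary at phi1 and phi2, with a maximum
# at phi1 and a minimum at phi2, and the normalized sound velocity is
# taken as u(phi) = sqrt(cos phi), consistent with cos phi0 > 0 above.
phi1, phi2, gamma = 1.2, 0.3, 0.8                      # boundary phases, loss parameter
U2 = (np.sin(phi1) - np.sin(phi2)) / (phi1 - phi2)     # shock velocity squared (assumed)

def dPi(phi):
    # Vanishes at phi1 and phi2; second derivative cos(phi) - U2 has the
    # right signs (negative at phi1, positive at phi2) for a shock profile.
    return np.sin(phi) - np.sin(phi1) - U2 * (phi - phi1)

def rhs(tau, y):
    phi, v = y
    return [v, -gamma * v - dPi(phi)]                  # phi'' + gamma*phi' = -dPi/dphi

# Start at rest, slightly displaced from the unstable maximum at phi1.
sol = solve_ivp(rhs, (0.0, 200.0), [phi1 - 1e-4, 0.0], max_step=0.1)
print(f"U = {np.sqrt(U2):.3f}, u(phi1) = {np.cos(phi1)**0.5:.3f}, "
      f"u(phi2) = {np.cos(phi2)**0.5:.3f}")
print(f"phi(final) = {sol.y[0, -1]:.3f}  (relaxes to phi2 = {phi2})")
```

With these parameters the trajectory rolls from the unstable maximum at φ1 down to the minimum at φ2, reproducing the monotonic shock profile of Fig. 6, and the printed velocities satisfy u(φ1) < U < u(φ2), anticipating the inequalities (35) below.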
The potential (30) should have a maximum at φ1, while the point φ2 should be a stationary point of the potential with the property (31), from which follows −φ1 < φ2 < φ1. The point φ2 can be either a minimum or a maximum. The boundary between these two cases (when φ2 is an inflexion point) can be found by equating the second derivative of the potential at the point φ2 to zero, Eq. (32). The approximate solution of (32) is φ2 = −φ1/2.

What was said above can be reformulated in a slightly different way. Because the physics is obviously symmetric with respect to simultaneous inversion of all phases, φ → −φ, in the following we consider only φ1 ∈ (0, π/2). If φ2 is positive, it is inevitably a point of minimum of the potential. In fact, the stationary points of the potential are given by Eq. (33). Because sin φ is concave downward for 0 < φ < π/2, the straight line crossing the sine curve at the points π/2 > φ1, φ2 > 0 can't cross the curve in between. Hence there are no stationary points between φ1 and φ2. The potential Π(φ) for positive φ2 is illustrated in Fig. 4. On the other hand, for φ2 < 0 the potential Π(φ) can have either a minimum or a maximum at φ2, as illustrated in Fig. 5. Looking at Fig. 5 (left) we realize that for the solution with φ1 and φ2 having opposite signs to exist, the effective friction coefficient γ should be large enough to prevent the escape of the particle above the potential barrier to the left of φ2. (There is no such restriction for the shock wave with φ1 and φ2 having the same sign, because in this case the left potential barrier is higher than the right one, as illustrated in Fig. 4.)

The case of a minimum of the potential at φ2 corresponds to the shock wave and was discussed at length in our previous publications 9,10. Equation (29) can be easily integrated numerically; the result of such integration is presented in Fig. 6. The case of a maximum we considered previously only for the particular case of the JTL in the absence of shunting 10. We called such travelling waves the kinks. Now we understand that similar kinks exist also in the lossy JTL (for −φ1 < φ2 < −φ1/2). Looking at Fig. 5 (right), presenting the potential for the kink, we realize that, since the particle stops at the unstable equilibrium point, for the kink to exist fine tuning of γ is necessary. In other words, for a given φ1 and given γ, only the kink with one definite value of φ2 can exist. In particular, in the absence of losses (γ = 0) only the kinks with φ2 = −φ1 are possible 10.

Everywhere above we considered the travelling wave going to the right, but, of course, by interchanging φ1 and φ2 we obtain the wave going to the left. The conditions for the shocks and for the kinks in the whole phase plane of the boundary conditions (φ1, φ2) are thus shown in Fig. 7. Two additional straight lines on this figure, φ2 = −φ1 and φ2 = φ1, present the kinks and the solitons respectively, which can exist in the bare-bones (unshunted) JTL 10 and propagate in both directions.

C. The shock velocity vs. the kink velocity

Differentiating the r.h.s.
of (25) with respect to φ, we obtain Eq. (34). For the shock, φ1 is a point of maximum of Π(φ) and φ2 is a point of minimum. Hence the second derivative of the potential with respect to φ is negative at φ1 and positive at φ2. Thus we arrive at the inequalities (35), which reflect a well-known fact of nonlinear wave theory: the shock velocity is smaller than the sound velocity in the region behind the shock but larger than the sound velocity in the region before the shock 14.

Fig. 7. The phase plane of the boundary conditions (φ1, φ2). Blue regions correspond to the shock wave moving to the right, green regions to the left. Yellow regions correspond to the kink moving to the right, red regions to the left. The thick black line φ2 = −φ1 corresponds to the kink, the thick black line φ2 = φ1 to the soliton, which can exist only in the bare-bones JTL and propagate in both directions.

From the inequalities (35) we can prove that a shock cannot split into two shocks. Actually, we can make an even stronger statement: two shocks moving in the same direction will merge. In fact, let there be a first shock φ2 ← φ3 and a second shock φ3 ← φ1 ahead of it. Because of the inequalities (35), the velocity of the first shock is larger, and the velocity of the second shock is smaller, than u(φ3). The statement is proved. Note that due to the one-dimensional nature of our problem we don't have to consider the corrugation instability of the shock wave [15][16][17][18][19].

For the kink, both φ1 and φ2 are points of minima. Hence the second derivative of the potential with respect to φ is positive at both points. Thus we obtain (36): the kink is supersonic from the point of view both of the region before and of the region after it.

D. Weak shock waves

For a weak wave, characterized by the condition |φ1 − φ2| ≪ 1, the r.h.s. of (25) can be approximated as (37), with α defined in (38). As a result, (25) can be simplified to (39). Let us make the change of independent variable (40), where the parameter β will be chosen later. After the change of variable, (39) takes the form (41). We are looking in this Subsection for analytical solutions of (39). After the change of variable, one such solution 20 (existing for the appropriate relation between α and γ, which will be determined immediately) can be found by inspection, Eq. (42). Actually, there are similar solutions for two other pairs of indices, but recalling the boundary conditions (24a), which after the change of the independent variable turn into (43), the solution chosen by us, in distinction from the other two, satisfies the boundary conditions (43). Substituting (42) into (41) we obtain (44), which turns into an identity provided β and γ satisfy the relations (45a), (45b). Solving (45a), (45b) we obtain (46a), (46b). So finally, if γ satisfies the condition (46b), the solution of (39) with the boundary conditions (24a) is (47). Equations (46b) and (47) are applicable both to the weak shocks and to the weak kinks. In particular, for φ2 = −φ1 the equations give γ = 0 and the solution (48). Let us return to Eq.
(39) and strengthen the assumption which led to it, to |φ1 − φ2| ≪ |φ|. In this case the equation can be approximated as (49), where α′ ≡ (sin φ)/2, and Eq. (41) takes the form (50). Again a solution can be found by inspection, Eq. (51). Substituting (51) into (50) we obtain (52), which turns into an identity provided β and γ satisfy the relations (53a), (53b). Solving (53a), (53b) we obtain (54a), (54b). So finally, if γ satisfies the condition (54b), the solution of (49) with the boundary conditions (24a) is (55). Note that although (49) is an approximation to (39), it can be integrated analytically for a totally different value of γ (and hence the analytic solutions (55) and (47) are totally different).

V. CONCLUSIONS

The interaction of sound waves with shock waves is well studied in fluid mechanics. In Section III we considered a similar problem for the JTL. The formulas for the reflection coefficient in one case and the transmission coefficient in the other (Eqs. (14) and (18)) turned out to be very simple and appealing. We established the relation between the shocks existing in the lossy JTL and the kinks, which, as we now understand, exist both in the lossy and in the lossless JTL. However, the solitons, which we studied previously in the lossless JTL, are absent in the lossy JTL. We also found the particular cases when the nonlinear equation describing weak travelling waves in the lossy JTL can be integrated analytically.

Appendix A: The model of superconducting grains

A physically appealing model of the JTL, composed of superconducting grains, is presented in Fig. 8. (For simplicity, in this Appendix we ignore the shunting capacitor.) Here, we take as the dynamical variables the phases Φn of the superconducting grains and the voltages vn. We realise that Eqs. (1a), (1b) (in the absence of the shunting capacitor) follow from Eqs. (A1a), (A1b) if we substitute φn = Φn−1 − Φn. Also, if we exclude vn from (A1a), (A1b), we obtain Eq. (A2), which is a particular case of the Fermi-Pasta-Ulam-Tsingou equation (with losses). It is interesting to compare (A2) with the equation from Ref. 21 describing a chain of interacting particles with friction, Eq. (A3), where m is the mass, yn are the displacements of the particles in the chain, U(z) is the potential of the interparticle interaction, and α is the friction coefficient. The comparison shows the substantially different character of the losses in the two systems.

It is also interesting to compare the JTL with the one-dimensional Josephson-junction array. The equation describing the fluxon dynamics in the array is the discretized version of the perturbed sine-Gordon equation, Eq. (A4), where α is the dissipation coefficient. It is appropriate to compare (A4) with the equation obtained by excluding vn from (1a), (1b), Eq. (A5). Again, the comparison shows the substantially different character of the losses in the two systems. But even in the absence of losses, (A5) is different from the sine-Gordon equation. Neither does (A5) in the continuum approximation coincide with the sine-Gordon equation with losses 23.

Appendix B: The continuum approximation

A natural question is how good the continuum approximation used everywhere in this paper is. To answer this question, let us return to Eqs. (1a), (1b) and exclude vn. We obtain Eq. (B1). The continuum approximation (in the narrow sense) consists in promoting the discrete variable n to the continuous variable Z and approximating the discrete second-order derivatives in the r.h.s.
of (B1) by the continuous derivatives:

sin φn+1 − 2 sin φn + sin φn−1 = ∂²(sin φ)/∂Z²   (B2a)

To find the limits of the applicability of this approximation, let us consider the continuum approximation in the broad sense and generalize, say, (B2b) to (B3). We realize that if the shunting is strong, that is either CJ/C ≫ 1 or ZJ/RJ ≫ 1 (the condition implied in this paper), the continuum approximation (in the narrow sense) can be justified when ∆φ ≪ 1, where ∆φ ≡ |φ1 − φ2|. In fact, from (39) it follows that in this case the time scale of the solution is proportional to 1/√∆φ if γ ≪ 1, and to 1/∆φ if γ ≫ 1. So the fourth-order derivative term in (B3) has an additional ∆φ ((∆φ)²) factor with respect to the second-order derivative terms, the sixth-order derivative term an additional (∆φ)² ((∆φ)⁴) factor with respect to the second-order derivative terms, and so on.

In our previous publication 10 we considered also the case of zero shunting. In this case, even if ∆φ ≪ 1, the continuum approximation has to be upgraded to the quasi-continuum approximation (B5). Thus we were able to study the kinks (and the solitons) in the absence of shunting.
4,774
2023-07-18T00:00:00.000
[ "Physics" ]
Auction-based competition of hybrid small cells for dropped macrocell users

We propose an auction-based beamforming and user association algorithm for a wireless network consisting of a macrocell and multiple small cell access points (SCAs). The SCAs compete for serving the macrocell base station (MBS) users (MUs). The corresponding user association problem is solved by the proposed bid-wait auction method. We consider two scenarios. In the first scenario, the MBS initially admits the largest possible set of MUs that it can serve simultaneously and then auctions off the remaining MUs to the SCAs, which are willing to admit guest users in addition to their commitments to serve their own host users. This problem is solved by the proposed forward bid-wait auction. In the second scenario, the MBS aims to offload as many MUs as possible to the SCAs and then admits the largest possible set of remaining MUs. This is solved by the proposed backward bid-wait auction. The proposed algorithms provide a solution that is very close to the optimum solution obtained by using centralised global optimisation.

Introduction

The fifth generation wireless system is anticipated to address the growing demand for spectrum and wireless capacity [1]. The use of small cell access points (SCAs) for cell densification is expected to increase spectral efficiency, as it allows aggressive reuse of frequencies within a macrocell. SCAs can be either operator deployed or user deployed, and they can operate in open-access mode, hybrid mode or closed-group mode [2]. Among these three modes, the works in [2] advocate the hybrid mode as it allows resources to be shared between host users (HUs) and guest users (GUs). The macrocell operator can provide incentives to the SCAs for serving its users [3,4]. Within this context, and using the notions of game theory, the wireless system can be categorised into buyers, sellers, goods and auctioneers [5]. An auction is a process of selling or buying goods or services, in which the goods are exchanged between the sellers and the buyers according to the variation of prices. Hence, pricing is used for coordinating and equilibrating the markets.

Related works

The benefits of offloading traffic have been extensively studied in [6][7][8][9]. The findings in [7] show that small cells can achieve higher network capacity and energy efficiency. In [8], a small cell activation mechanism for offloading traffic from a macrocell to small cells, while avoiding user quality of service (QoS) degradation, was proposed. The work in [9] considered a centralised energy-aware offloading mechanism for a cloud radio access network. In [10], a problem wherein the service providers compete for femtocells under a multi-leader-follower game framework was considered. A framework for user association in infrastructure-based wireless networks that considered optimal throughput, delay and load equalisation was proposed in [11]. Auction-based algorithms have been proposed in [12][13][14][15][16]. A reverse auction framework based on the Vickrey-Clarke-Groves (VCG) mechanism was proposed in [12] for fair and efficient access permission that maximises the social welfare of a network consisting of one wireless service provider and several femtocell owners. The authors in [17] proposed a mechanism to switch between open and closed modes to maximise performance; the problem was solved using a game-theoretic approach.
The works in [4,18,19] proposed distributed algorithms for assigning users to SCAs using auctioning, heuristic beamforming designs, Stackelberg games and evolutionary games. Despite the auction-based algorithms reported in [20,21], algorithms that consider multiple user access through spatial beamforming combined with an auctioning mechanism have not been reported in the literature; this is the focus of this paper. Contributions Our objective is to develop an auction framework for performing beamforming-based spatial multiplexing, user offloading and user association in a heterogeneous network. This framework enhances the network capacity by better utilising the transmission infrastructure. The specific contributions of our work are as follows: • We propose and analyse a novel auction mechanism called the bid-wait auction (BWA) that jointly performs downlink beamformer design and user association. To the best of our knowledge, auction mechanisms in the literature have not considered joint beamformer design and user allocation. • We develop a novel valuation function for each bidder that automatically monitors the bidder's resource budget. • We propose and analyse a novel payment rule that allows the BWA to allocate items to bidders with sparse information. We prove the existence of a dominant-strategy equilibrium (DSE). Notations: We use upper-case bold face and lower-case bold face letters for matrices and vectors, respectively. The notation ∥ ⋅ ∥ denotes the Euclidean norm. The operators ℜ( ⋅ ) and ℑ( ⋅ ) extract the real and the imaginary parts of their arguments, respectively. The regular and Hermitian transposes are denoted by ( ⋅ )^T and ( ⋅ )^H, respectively. … access to the network. The MBS and each SCA have maximum transmission powers of p_0^max and p_s^max, respectively. All of the users have a single antenna at the receiver and have specific QoS requirements. Motivation It is likely that the resources at the SCAs may be underutilised by the HUs. On the other hand, resources at the MBS may be overutilised. To avoid user dropouts, the MBS will offload some of its users to SCAs. In the presence of a dense deployment of SCAs, there is a high chance that a GU may be in the vicinity of more than one SCA. This work proposes a mechanism that handles competition among SCAs to serve MUs in return for monetary benefits, through user allocation and beamforming. Forward bid-wait auction (FBWA) and backward bid-wait auction (BBWA) algorithms We consider two scenarios: In the first scenario, the MBS admits the maximum possible number of MUs it can serve and then offloads the dropped MUs to SCAs via auctioning. In the second scenario, the MBS allows the SCAs to bid for serving GUs and then aims to admit the remaining MUs. We propose the BWA and supplement it with an admission control to develop the FBWA and BBWA algorithms. The FBWA and BBWA algorithms solve the problems in the first and second scenarios, respectively. System metric design We index the MBS by 0 and the sth SCA by s. Let the set of MUs served by the MBS be ℳ_0. Each MU is denoted by index m. In the downlink, a beamformed signal is transmitted to each MU m from the MBS. We denote ℳ′_0 ⊆ ℳ_0 as the set of admitted users, whose cardinality is the parameter to be maximised. The user admission problem at the MBS is formulated as (6), where |ℳ′_0| denotes the cardinality of the set ℳ′_0. We assume that all of the MUs have identical QoS requirements. This latter assumption encourages the SCAs to admit as many GUs as possible, as shown later.
The problem in (6) is non-convex due to its non-convex objective function. However, the QoS constraints can be rewritten in their equivalent second-order cone (SOC) form [22], yielding problem (9). The objective in (9) is an ℓ0-norm, which counts the number of non-zero elements in the vector a_0. This ℓ0-norm problem is combinatorial, non-convex and non-deterministic polynomial-time (NP) hard. A widely adopted approach in the literature for dealing with this form of non-convex problem is to approximate the ℓ0-norm with an ℓ1-norm [23,24]. Hence, we replace the objective function with an ℓ1-norm. This, together with the SOC constraints, makes the overall problem a convex problem known as SOC programming [22], given as (10) subject to the constraints in (9). The above convex problem can be solved using the CVX tool [25], which indicates whether the problem is feasible or not. The value of each a_m^0 indicates the feasibility gap for the corresponding user, and hence the transmitter's preference among users. To obtain the optimal admission set ℳ′_0, as proved in [23], the elements of a_0 are rearranged in ascending order and the MUs are sequentially admitted, starting with the users that have the smallest a_m^0. This is done by performing a feasibility check at every admission stage, by solving (11). If a newly admitted user makes the constraints in (11) infeasible (i.e. when the feasibility test fails), then that user is removed from the set ℳ′_0. The resulting admission set ℳ′_0 is optimal in the sense of maximising the number of admitted users. Bid-wait auction The MBS wishes to offload as many users as possible to SCAs. This is usually formulated as surplus maximisation in auctioning [26]. Therefore, we use the number of admitted GUs as our performance metric. We form a BWA considering the MBS as the auctioneer, the SCAs as the bidders, and the GUs as the items. Let us denote the beamformer vector at the SCA for serving the ith HU, given that GU g is admitted by the SCA, as ŵ_i. Also, we denote the beamformer vector at the SCA for serving the kth HU before GU g is admitted as w_k. The cost of connecting the gth GU during the rth auction round is given by (12), where μ is the cost per unit power. The first term in (12) is the total transmission power after the admission of the gth GU; the last term is the total transmission power before the gth GU is admitted. Each served user pays the SCA an amount κ per unit of data rate. Since the MBS auctions some of its users to the SCAs, the GUs will pay the SCAs, which will in turn pay the MBS. The difference between the payments is the profit generated by the SCA for serving a GU. We denote the SINR target for the GU as ξ_g^s. Hence, each GU has a marginal value v_{sg}^r, given by (13), which is the value contributed by that GU given the already admitted users. These values are private and unknown to the other bidders and the auctioneer. The marginal value in (13) demonstrates that the GUs are substitutes, i.e. admitting a user at an SCA at a particular stage will change the required beamformers and power allocation of the already admitted users, as well as of the remaining users on which SCAs will be bidding. This will change the preference order of items for every SCA. Therefore, it is critical that an SCA comes up with an effective preference profile. In Section 4, we propose two types of preference profiles. Surplus maximisation in BWA An intuitive approach to surplus maximisation is to allocate items to the bidders that value them the most.
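The ℓ1-relaxed admission step described above can be sketched compactly in Python with CVXPY. Since the extracted text omits the exact per-user SINR/SOC constraints of (9)-(11), the sketch below uses a simplified convex stand-in (per-user quality with a power budget); only the overall recipe, minimise the ℓ1-norm of the slacks, sort them ascending, then admit sequentially with feasibility checks, mirrors the paper. All numbers and names here are hypothetical.

```python
# A hedged sketch of the l1-relaxation admission idea behind (9)-(11),
# using a simplified convex stand-in for the true per-user SINR/SOC constraints.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
M, target, budget = 6, 1.0, 3.0
gain = rng.uniform(0.2, 1.0, size=M)        # per-user channel quality (toy values)

p = cp.Variable(M, nonneg=True)             # per-user power (stand-in for beamformers)
a = cp.Variable(M, nonneg=True)             # feasibility-gap slacks, as in (10)
prob = cp.Problem(cp.Minimize(cp.norm1(a)),
                  [cp.multiply(gain, p) + a >= target, cp.sum(p) <= budget])
prob.solve()

# Sequential admission: sort users by slack (ascending) and admit while the
# slack-free problem stays feasible, mirroring the feasibility check (11).
admitted = []
for m in np.argsort(a.value):
    trial = admitted + [int(m)]
    q = cp.Variable(len(trial), nonneg=True)
    feas = cp.Problem(cp.Minimize(cp.sum(q)),
                      [cp.multiply(gain[trial], q) >= target, cp.sum(q) <= budget])
    feas.solve()
    if feas.status == cp.OPTIMAL:
        admitted = trial
print("admitted users:", admitted)
```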
This allocation rule indirectly allows maximisation of the number of allocated items. The BWA is a collection of concurrent sealed-bid single-item auctions. In the proposed BWA, the objective of the MBS is to assign the GUs to those SCAs that value them the most. Let us define a set 𝒢_s ⊆ 𝒢 containing all GUs that can be assigned to the sth SCA, and a competitors' set 𝒞_g which contains all SCAs competing to connect the gth GU. A feasible assignment 𝒜 is a set of SCA-GU pairs (sg), with g ∈ 𝒢_s. An SCA can be part of more than one pair (sg) ∈ 𝒜. The surplus maximisation problem at the MBS is formulated as the integer program (14), where R is the total number of auction rounds, 𝒜′ is the set of all possible SCA-GU assignment pairs (sg) (𝒜′ ⊆ 𝒜), and the x_{sg}^r, g ∈ 𝒢_s, are binary decision variables indicating the association of SCAs: x_{sg}^r = 1 means that SCA s is assigned to GU g, and otherwise x_{sg}^r = 0. Hence, the term Σ_s v_{sg}^r x_{sg}^r is the surplus at the rth auction round. We propose to solve (14) by running simultaneous sealed-bid single-item auctions wherein, at each auction round, each bidder's action is a bid b_{sg}^r (not necessarily the true value) on its most preferred GU. This accounts for the summation over the total number of auction rounds in the objective. The BWA mechanism therefore decomposes the combinatorial nature of the problem and runs virtual single-item auctions repetitively. The second and third constraints ensure that each SCA can be assigned to one or more GUs and that each GU can be assigned to only one SCA. Bidders' valuation functions If an SCA wins a GU during auction round r, it pays a price p_{sg}^r to the MBS. The bidder's utility model at the rth auction round, on the bid/action profile b^r = [b_{1g}^r, …, b_{Sḡ}^r], is a quasilinear utility model defined as (15), where the subscripts g and ḡ could refer to the same or different GUs. The overall objective of the SCA is to maximise (16). By assuming positive utility at each auction round, and a payment p_{sg}^r(b^r) that is independent of v_{sg}^r(b^r) x_{sg}^r(b^r), the utility in (16) is maximised by admitting as many GUs as possible. This is because we assume that all of the MUs have identical QoS targets. Fixed preference profile (FPP) criterion In this case, we assume that bidders determine their preference profile once, at the beginning of the auction, and fix it for the entire BWA. Each SCA identifies the GUs that fall within its auction coverage area. This is followed by determining the FPP by solving the admission problem, adapting (10) to form an ℓ1-norm admission problem for the SCA, where the third constraint ensures that the HUs are given first priority. To build up a preference set of GUs 𝒢′_s ⊆ ℱ_s, we sort the vector a_s in ascending order of its elements. The corresponding indices of the sorted a_s, with the HUs excluded, give the FPP f_s. It should be noted that at this stage, no valuation profile corresponding to f_s is determined. Since there is no guarantee that all GUs in the preference set will be won, the values are computed on a 'need-to-know' basis. At every auction round, an SCA uses (13) to place a value on its most preferred GU. Adaptive preference profile (APP) criterion It is anticipated that the level of preference over GUs will be reduced once a particular GU has been admitted, due to the substitute nature of the GUs. Therefore, the preference profiles need to be revised every time a new GU is admitted.
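Returning to the decomposition of (14) described above: when the marginal values are held fixed, each GU may go to at most one SCA, and the SCAs face no hard capacity limit, the objective separates per item, so per-item "highest value wins" auctions recover the exhaustive optimum. The toy check below illustrates this on hypothetical numbers; in the actual BWA the substitute effect re-shapes values between rounds, so the decomposition is an approximation rather than an exact equivalence.

```python
# Toy check (hypothetical values) of the per-item decomposition behind (14).
from itertools import product

v = {  # v[(sca, gu)]: marginal value (hypothetical numbers)
    ("SCA1", "GU1"): 9, ("SCA1", "GU2"): 4,
    ("SCA2", "GU1"): 7, ("SCA2", "GU2"): 6,
}
scas, gus = ["SCA1", "SCA2"], ["GU1", "GU2"]

# Exhaustive search over all assignments (each GU -> one SCA, or unassigned):
best_val = max(
    sum(v[(s, g)] for g, s in zip(gus, choice) if s is not None)
    for choice in product(scas + [None], repeat=len(gus))
)
# Per-item decomposition: each GU simply goes to the SCA valuing it most:
greedy_val = sum(max(v[(s, g)] for s in scas) for g in gus)

print(best_val, greedy_val)   # both give 15 (GU1 -> SCA1, GU2 -> SCA2)
```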
The values for every GU g ∈ 𝒢_s are computed separately and sorted in descending order to determine the current preference profile. A bid is then placed on the GU that is perceived to have the highest value. Let the QoS targets of the HUs and the gth GU be defined accordingly, and let ℋ_s denote the set of HUs and admitted GUs. For every available GU g ∈ 𝒢_s, each SCA determines the connection cost by solving the corresponding feasibility problem; the connection cost can then be determined using (12). With the exception of the first auction round, we note that for every auction round, losers from the previous round do not need to revise their preference profiles. The bidders on WAIT (i.e. bidders whose bid decision is withheld) do nothing, while the winners are required to revise their preference profiles and submit new bids. The losers from the previous round only need to submit a bid on the next most preferred and available GU, since the values are already known. When the preference profile needs revision, the values of all of the available GUs need to be recalculated. Though this may appear costly, it offers the SCAs the capability to identify and prune away all the GUs that will never be feasible for admission. This is not possible when the FPP criterion is used. Under the FPP criterion, only the value of the next preferred and available GU is determined at every SCA in the contact_list (i.e. the list of SCAs that are eligible to submit new bids). For any non-conflicting preference profile, the MBS will permit the corresponding SCA to submit bundle bids on the largest set of the remaining GUs that it can admit simultaneously. If any SCA has knowledge that some GUs are not bid on by all of the remaining SCAs, there is a possibility of unfaithful bidding. However, as it is difficult for any SCA to acquire the preference profiles of other SCAs, we exclude this possibility in our work. BWA mechanism design We propose a BWA auction which inherits some properties of the second-price auction proposed in [27]. To reduce the amount of information shared between the MBS and the SCAs, the BWA uses an iterative indirect mechanism to gather useful information from the SCAs. It is assumed that the MBS has knowledge of the locations of all of the bidders and the GUs. Therefore, it can formulate the preference sets of all of the SCAs. The MBS sets a rule that each bidder should submit one bid at a time. The bids should be monotonically decreasing over the auction rounds. Even though the BWA uses some of the principles of the VCG mechanism, we emphasise that the two methods are totally different. To highlight this difference, we present the following example, in which the SCAs' preference sets include {GU2, GU3, GU4, GU5}. The BWA will iterate as shown in Fig. 2. Note that unlike the VCG mechanism, which charges the winner the second-highest bid on the winning item, the BWA charges the winner the second-highest price from the competitors' set. The set 𝒞_GU1 := {SCA1, SCA2, SCA3, SCA4} is the competitors' set for GU1. Therefore, in the first auction round the BWA allocates GU1 to SCA1 and charges it 7, the bid from SCA2. SCA2 and SCA4 are then put on WAIT, while SCA1 and SCA3 are put in the contact_list, making them the only two bidders allowed to submit new bids in the second round. The same process is repeated until the contact_list becomes empty. The BBWA and the FBWA algorithms, summarised in Algorithm 1 (Fig. 3) and Algorithm 2, utilise this BWA in their main loops.
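The allocation and payment step of a single BWA round can be sketched as follows. The bids are hypothetical; only the outcome "GU1 goes to SCA1 at SCA2's price of 7" is taken from the worked example above. The key point is that the winner pays the highest competing bid from the competitors' set (the critical bid), which need not be a bid placed on the item itself, and that a decision is withheld (WAIT) when a competitor's bid on a more preferred item exceeds the winner's bid.

```python
# Toy sketch of one BWA allocation/payment step (hypothetical bids).
def bwa_round(item, bids):
    """bids: {bidder: (preferred_item, bid_value)} over the item's competitors' set."""
    on_item = {b: v for b, (it, v) in bids.items() if it == item}
    winner = max(on_item, key=on_item.get)
    # Critical bid: highest current bid among the remaining competitors,
    # regardless of which item that bid targets (cf. Proposition 1).
    critical = max(v for b, (it, v) in bids.items() if b != winner)
    # If a competitor's bid on another item exceeds the winner's bid,
    # the winner must WAIT until the auctioneer has complete information.
    if critical > on_item[winner]:
        return ("WAIT", None)
    return (winner, critical)

bids_r1 = {
    "SCA1": ("GU1", 9.0),   # hypothetical
    "SCA2": ("GU1", 7.0),   # hypothetical; becomes the critical bid
    "SCA3": ("GU1", 4.0),   # hypothetical
    "SCA4": ("GU2", 6.0),   # hypothetical; currently prefers GU2
}
print(bwa_round("GU1", bids_r1))   # -> ('SCA1', 7.0)
```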
We now develop a dominant-strategy incentive compatible (DSIC) mechanism for the BWA and prove that the BWA has a unique DSE at each auction round and a unique DSE for the entire BWA. Since the BWA is a collection of concurrent sealed-bid single-item auctions, we can confine our problem to a single-parameter environment [26] for mechanism design. The outcome of such a mechanism is the allocation and payment vectors x^r = [x_{1,g}^r, …, x_{S,ḡ}^r] and p^r = [p_{1,g}^r, …, p_{S,ḡ}^r]. Allocation rule If the bids from a particular bidder are not monotonically decreasing, its current bid will not be accepted and the bidder is dropped from the auction. In every auction round, the BWA allocates the GU to the bidder with the highest bid, provided the feasible assignment set has the minimum required information, using the following allocation rule. Proposition 1: Assume the auctioneer has the preference sets of all bidders, 𝒢_s ∀s ∈ 𝒮. Suppose bidders j and k are the only bidders who are eligible to bid on item m. If during the rth auction round, item m is bidder j's first preference with a bid of b_{jm}^r, and the current bid from bidder k is b_{kp}^r on item p (i.e. item p is more preferred than item m from bidder k's perspective), then the following conditions exist: i. If b_{jm}^r > b_{kp}^r, it follows that b_{jm}^r > b_{km}^r, so bidder k stands no chance of winning item m. The item is then assigned to bidder j. Under this condition, the auctioneer has complete bid information on item m. We henceforth refer to bid b_{kp}^r as bidder j's critical bid. ii. If b_{jm}^r < b_{kp}^r, then bidder k still stands a chance of winning item m. Therefore, bidder j will have to WAIT (hence the term BID-WAIT) until the auctioneer has the right information to announce the winner between bidders j and k. Under this condition, the auctioneer has incomplete bid information on item m. Proof: Since the auctioneer has access to the preference sets and uses the one-bid-at-a-time rule, and by assuming truthful bidding, the preference profiles at the SCAs dictate that the bids submitted must be monotonically decreasing over the auction rounds. Therefore, the next bid on the next available preferred item is always less than or equal to the currently submitted bid. □ SCA-GU admission: BWA 1 Perform steps 3-12 of Algorithm 1 (Fig. 3). 2 Set ℳ_0 = ℳ_0 ∖ 𝒢′. MBS-MU admission 3 Solve (6) and (10) to get ℳ′_0. Payment rule The BWA extends the second-price rule by charging the winner the second-highest bid from the bidders in the competitors' set 𝒞_g, i.e. the critical bid. It is very important to note that the critical bid need not be the second-highest bid on a particular item, as elaborated in Proposition 1. On the other hand, if v_{sg}^r ≥ B, the maximum utility that bidder s can obtain is max{0, v_{sg}^r − B} = v_{sg}^r − B, which occurs by bidding truthfully and winning. □ Theorem 2: Bidding on the most preferred GU is a dominant strategy in the bid-wait auction (see (20) and (21)). Proof: Without loss of generality, consider two items with identities g and ḡ. Fix an arbitrary bidder s with the preference profile f_s = [g, ḡ] at the rth auction round. Set its valuation profile such that v_{sg}^r > v_{sḡ}^r, and denote the bids from the other bidders as b_{−s}^r and b_{−s}^{r+1} during the auction rounds r and r+1, respectively. Again without loss of generality, let us assume that all other bidders have the same preference profiles as bidder s at the rth auction round.
Let B^r = max_{t≠s} v_{tg}^r and B^{r+1} = max_{z≠s} v_{zḡ}^{r+1} denote the critical bids for bidder s during auction rounds r and r+1, respectively. The critical bids B^r and B^{r+1} satisfy B^r > B^{r+1}. If during the rth auction round bidder s bids b_{sḡ}^r on GU ḡ, its potential utility is u_s = u_{sḡ}^r + u_{sg}^{r+1}. In this case, only the four distinct outcomes described in (20) exist. In (20d), ϵ_{sg|ḡ}^{r+1} > 0 implies a decrease in the valuation of GU g during auction round r+1, given that GU ḡ is already admitted. In (20a), bidder s is put on WAIT during auction round r and loses GU g; in auction round r+1, it also loses GU ḡ. In (20b), bidder s is put on WAIT during auction round r and loses GU g, but during auction round r+1 it wins GU ḡ. In (20c), bidder s wins GU ḡ during auction round r and the other bidders are put on WAIT; in auction round r+1, only bidder s is allowed to submit a new bid b_{sg}^{r+1} < b_{sḡ}^r. Still under (20c), if the new bid b_{sg}^{r+1} < B^r, then it loses GU g. In (20d), bidder s wins both GUs. On the contrary, suppose bidder s places its order of preference truthfully by bidding on item g in the rth auction round and on ḡ in the (r+1)th auction round. The potential utility bidder s will obtain is … preferred. The BBWA with FPP criterion admits more users when compared with the SAA algorithm. As seen in Fig. 4b, in terms of revenue generation, the SCAs would prefer the BBWA with the FPP criterion. However, as the primary intention of the MBS is to minimise the number of dropped users, it will also prefer the BBWA algorithm. In comparison to the SAA algorithm, the BBWA with FPP criterion generates more revenue at lower target rates, while the SAA algorithm generates more revenue at higher target rates. This is due to the following reason. Competition among the bidders is much stronger at lower target rates than at higher target rates. In the SAA algorithm, bidders (SCAs) pay the bid they submitted rather than the second-highest bid from the set of competitors. Hence, when there is high competition among SCAs, the price paid for the GUs in the SAA is higher than that paid in the BBWA with FPP criterion. In contrast, when competition is low, the bidders pay less under SAA than under the BBWA with FPP criterion. We also compared the average system overheads, measured in terms of the number of invitations for bidding, the number of bids submitted and the number of announcements made. As seen in Fig. 4c, the system overhead drops with increasing target data rate. This is because with an increasing target data rate, the SCAs reach their admission capacity quickly and there is no need for further auctioning. The average number of auction rounds is compared in Fig. 4d; for the same reason, the number of auction rounds drops with increasing data rate. To reduce the system overheads in SAA, the price increment step was set as δ = 0.001 × (target data rate)/(0.5) (bits/s/Hz). Regardless of this price increment adaptation, it is observed in Figs. 4c and d that the proposed algorithms outperform the SAA algorithm in terms of both system overheads and auction rounds. In Fig. 5, we compared the performance of the BWA to the centralised solution proposed in [31]. We considered six MUs and two SCAs. As seen in Fig. 5a, as the target data rate of the MUs is increased, the total transmission power increases exponentially. Starting from a target data rate of 10.5 bits/s/Hz, the average number of admitted users at SCA 1 drops from 3 to 2.75.
Consequently, the total transmission power also drops. A similar trend is observed in Fig. 5b for the BWA. By comparing Figs. 5a and b, it is observed that the BWA is close to optimal. Conclusion We have proposed a framework that performs user association and beamforming in a wireless downlink heterogeneous network through auctioning. We considered two scenarios. In the first scenario, the MBS admits as many users as it can serve and then auctions off the remaining users to the SCAs. This is solved using the FBWA algorithm. In the second scenario, the MBS auctions off as many users as possible to the SCAs and then admits the largest possible set of users from the remaining users. This is solved using the BBWA algorithm. The results show that the BBWA with FPP criterion is preferred by the MBS as well as the SCAs. The proposed algorithms provide a close-to-optimal solution with significant savings in complexity.
6,420.4
2017-03-21T00:00:00.000
[ "Computer Science" ]
Blood culture collection technique and pneumococcal surveillance in Malawi during the four year period 2003–2006: an observational study Background Blood culture surveillance will be used for assessing the public health effectiveness of pneumococcal conjugate vaccines in Africa. Between 2003 and 2006 we assessed blood culture outcome and performance in adult patients in the central public hospital in Blantyre, Malawi, before and after the introduction of a dedicated nurse-led blood culture team. Methods A prospective observational study. Results Following the introduction of a specialised blood culture team in 2005, the proportion of contaminated cultures decreased (19.6% in 2003 to 5.0% in 2006), the blood volume cultured increased, and pneumococcal recovery increased significantly from 2.8% of all blood cultures to 6.1%. With each extra 1 ml of blood cultured, the odds of recovering a pneumococcus increased by 18%. Conclusion Standardisation and assessment of blood culture performance (blood volume and contamination rate) should be incorporated into pneumococcal disease surveillance activities where routine blood culture practice is constrained by limited resources. Background Blood cultures are an essential component of good clinical care in the diagnosis and management of blood stream infections (BSI), which are frequent in hospitalised patients in Malawi and the rest of Africa [1][2][3][4][5][6]. Information from blood culture surveillance is also an important tool for establishing public health priorities, assessing the impact of interventions -particularly vaccines -and providing information on antimicrobial resistance patterns to help formulate prescribing guidelines for empirical therapy. Blood culture surveillance has been the key tool used in the USA for recognising the enormous potential of childhood pneumococcal conjugate vaccination for decreasing disease in adults [7]. In Malawi, as in much of the rest of sub-Saharan Africa, blood culture facilities are available in only a limited number of centres. As a consequence, single reports are often extrapolated as representative of disproportionately large regions and time periods, and inaccuracies in these reports are more likely to be perpetuated as a result of the lack of alternative data. Blood cultures need to be specific and sensitive, and therefore as representative as possible of the true BSI disease burden, particularly where BSI surveillance is restricted to very few sites. Isolation of bacteria from blood is usually taken as definitive proof of disease aetiology; this is particularly true for pneumococcal pneumonia, where blood culture remains the gold standard for diagnosis. However, blood cultures lack sensitivity, which may vary as a result of both laboratory and clinical factors. Laboratory factors include the choice of culture media and the speed and duration of incubation. Among clinical factors, recent receipt of antibiotics and poor collection technique leading to contamination decrease sensitivity, while culturing larger volumes of blood generally increases the yield of positive results [8][9][10][11][12][13]. Investigation of the relationship of blood volume to recovery has tended to concentrate on large volumes of blood in excess of 20 mls and not on the relatively smaller volumes most typically taken during routine clinical practice in Africa.
The blood culturing service at the Queen Elizabeth Central Hospital (QECH) in Blantyre, Malawi is the only large-scale, unselected and continuous assessment of BSI in Malawi and thus provides information of national importance to health planners and researchers. Samples for blood culture have traditionally been taken by nurses, medical and paramedical staff in attendance on the wards of the QECH. In March 2003 an audit of blood culture performance was carried out in the medical wards, revealing poor collection technique and inadequate volumes of collected blood. This information was fed back to staff starting in January 2004, with regular updates throughout that year, but with little overall change in the outcome (contamination rate and isolation rate) of blood cultures. In January 2005 a dedicated team of nurses was established to perform all the blood cultures in adult patients in the QECH. This was a direct consequence of the failure to measure an improvement in blood culture performance through feedback to the general clinical staff. We report the changes in blood culture results following the introduction of this team and relate bacterial isolation to the blood volume collected. Methods All of the work described was carried out in the adult medical wards of the Queen Elizabeth Central Hospital (QECH) between January 2003 and December 2006. The QECH is the largest public hospital in Malawi; it provides care for the more than one million residents of Blantyre district and takes referrals from the southern region of the country. The medical department has ~250 beds occupied at any given time and admits ~10,000 patients annually. In February-April 2003 an audit of blood culture technique and volume was undertaken. In January 2004 the results of this audit were fed back to the staff involved in blood culture collection. Throughout 2004 there were regular updates of blood culture performance, provided at least monthly at medical department meetings including the staff responsible for taking blood for culture. When new staff arrived in the department, demonstrations of proper blood collection technique were given at three-monthly intervals by study team members. In January 2005 a dedicated team of nurses was introduced with responsibility for performing all blood cultures. Nurses were chosen as no cadre of staff exists in Malawi with a license to perform phlebotomy. These nurses were given appropriate training, including laboratory visits, and are supervised by a senior nurse manager. Whilst they also undertake general nursing duties, their work priority is the performance of blood cultures. They take responsibility for the management of the physical resources required for phlebotomy and for liaison with the laboratory. If they encounter a problem they have direct access to the nurse manager. The results of blood cultures are returned to patients and their medical team through the blood culture nurses. The target for contamination rates is set at 5%. Blood cultures are requested by the admitting medical officer. The majority of these staff change at regular intervals after a few months in post but are encouraged to follow departmental guidelines for the ordering of blood cultures. These include axillary temperature > 37.5°C or < 35.0°C, a clinical syndrome of pneumonia, sepsis or meningitis, any altered level of consciousness or any life-threatening illness as judged by the admitting officer.
These policies and guidelines for the selection of patients for blood culture did not change throughout the period of the present study. All departmental staff receive a small printed copy of the departmental protocols, to be carried on their person for reference, and undergo a period of training at the commencement of their attachment. There is no regular formal audit of compliance with the protocol. Blood culture analysis is performed using an automated system (BacTalert®, Bio-Merieux). Blood is taken from the patient and transferred to a single aerobic blood culture bottle containing the manufacturer's standard culture medium. Bottles are either placed in a ward-based incubator or transferred immediately to the microbiology laboratory for processing, depending on the time of collection. A count of blood culture bottles provided by the lab is kept and compared to the blood culture bottles received back from the wards, along with reports of bottle spoilage. This process ensures all blood cultures taken are analysed. Processing of samples follows standard operating procedures consistent with accepted laboratory practice. These protocols were unaltered over the four years of the surveillance period. Isolation of Diphtheroids, coagulase-negative Staphylococci, Micrococcus spp. or Bacillus spp. other than anthracis is recorded as contamination. All other isolates are treated as significant. Collected blood volume was measured during Feb/March 2003 and Feb/March 2005. All blood cultures arriving in the laboratory on Mondays and Thursdays were subjected to assessment. Blood weight was measured by subtracting the mean unfilled and uncapped blood culture bottle weight from the weight of the bottles delivered to the laboratory. The mean weight of an unused bottle was estimated by sequential measurements on 20 unused bottles. Weights of unused bottles were highly consistent within each period of assessment: in 2003 the mean weight was 93.7 g (SD 0.15) and in 2005 it was 68.4 g (SD 0.16) (the blood culture bottle structure was changed during the intervening period by the manufacturer). The staff in both the hospital and the laboratory were unaware that an assessment of blood culture volumes was underway. In 2003 we assessed 500 bottles; in 2005 we measured 250 bottles. This provided sufficient power to demonstrate a significant increase in blood volume above a geometric mean of 5 mls. Results of blood cultures are recorded in a Microsoft Access® database. Statistical analysis was performed using STATA8 software (College Station, Texas). Comparison of blood volumes was performed using the unpaired Student's t-test, and the relationship between blood volume collected and isolation rate was assessed using logistic regression analysis. Temporal changes in isolate recovery were evaluated by comparing recovery rates in the pre-blood-culture-team years 2003/4 with the post-team years 2005/6 using the chi-squared test. No ethical approval was sought for this study as it used routinely collected data. Analysis was undertaken on an anonymised database. Results There was no significant change between 2003 and 2004 following the introduction of performance feedback to staff (Table 1). Following the introduction of the blood culture team in 2005, there was a significant decline in the rate of retrieval of contaminants from blood cultures, from 19.6% to 5.0% (Table 1). The quantity of blood collected for culture also increased significantly during this period, from a median of 4.6 to 9.7 mls per bottle (Table 1).
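The bottle-weighing method described above reduces to simple arithmetic plus an unpaired t-test. The sketch below illustrates it with hypothetical filled-bottle weights; only the mean empty-bottle weights (93.7 g and 68.4 g) are taken from the text, and roughly 1 g of blood per ml is assumed.

```python
# Sketch of the volume assessment: filled-bottle weight minus mean unfilled
# weight gives the inoculated blood mass (~ml), compared with an unpaired
# t-test as in the reported analysis. Filled weights are hypothetical.
import numpy as np
from scipy import stats

unfilled_2003, unfilled_2005 = 93.7, 68.4                  # from the text
filled_2003 = np.array([97.5, 98.9, 98.0, 99.2, 97.1])     # hypothetical
filled_2005 = np.array([78.6, 77.9, 79.3, 77.2, 78.8])     # hypothetical

vol_2003 = filled_2003 - unfilled_2003                      # ~ml per bottle
vol_2005 = filled_2005 - unfilled_2005

t, p = stats.ttest_ind(vol_2003, vol_2005, equal_var=False)
print(vol_2003.mean(), vol_2005.mean(), p)
```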
Non-typhoidal Salmonella and Streptococcus pneumoniae were the most frequently isolated organisms during all study periods, making up 76.2, 68.2, 74.0 and 67.1% of all significant isolates during the four study years, respectively. The other major isolates are detailed in Table 1. The increased recovery of S. pneumoniae between 2003/4 and 2005/6 was highly significant, with a 118% increase in isolation. There was also a significant 25% decrease in the recovery of non-typhoidal Salmonella between these two periods (p < 0.001) (Table 1). Amongst the other isolates there was a significant increase in the recovery of E. coli, Cryptococcus neoformans and Salmonella typhi. Geometric mean blood volumes were highest in blood cultures from which S. pneumoniae and C. neoformans were recovered (Table 2). There was also a significantly increasing chance of recovering S. pneumoniae with larger blood volumes cultured. Over the range of blood volumes collected in this study, for every extra 1 ml of blood the odds of recovering S. pneumoniae increased by 18%. This trend was not seen for non-typhoidal Salmonella. When adjusted for year of recovery, this finding persisted. There was a trend towards decreasing recovery of both other Gram-negative and Gram-positive isolates with increased blood volume. However, the majority of these isolates were recorded during the 2003 assessment period (35 vs 2), which limits the validity of this comparison. Discussion The QECH is an important tertiary referral centre in Malawi providing a blood culture service for patient care. The results of blood cultures from this institution have been reported in the past [6,14] and have helped determine clinical and research priorities and antibiotic prescribing practice. Inadequacies of technique and procedure in the sampling of blood for culture were corrected during 2005 by the introduction of a team of blood culture nurses. As a result of this, we measured an increased yield of blood culture isolates, an effect that was most dramatic for the isolation of pneumococci. Our findings suggest that surveillance for invasive pneumococcal disease in our region should pay close attention to the quality and quantity of blood collection at the bedside. The potential for selection bias in this study was low. Blood cultures were ordered as a part of standard departmental clinical care, unconnected with any of the investigative team. Departmental indications for blood cultures remained unchanged over the course of the study. The proportion of all adult admissions sampled each year remained relatively constant at around 60% (data not shown), suggesting that there was no fundamental change in the individuals sampled that might bias sampling towards respiratory illness and greater recovery of pneumococcus. Over the four-year period, data from active surveillance on clinical presentation to hospital indicate no secular trends in illness presentation in this population. Inoculated blood volumes were assessed on a representative sample of blood cultures. Although the sample selection was not random, the sampling method was unlikely to lead to any systematic bias in isolate recovery or volume between the two assessment periods: clinical staff were unaware of the blood culture volume assessment; samples were taken from acute admissions; and, as noted above, there was no evidence of a major change in the pattern of clinical presentation during this period. Laboratory practice was consistent throughout the duration of the study.
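As a quick illustration of the odds ratio reported above: an odds ratio of 1.18 per extra ml compounds multiplicatively, so the observed shift in median inoculated volume from 4.6 to 9.7 ml implies roughly a 2.3-fold increase in the odds of recovering S. pneumoniae. This extrapolation is illustrative arithmetic from the paper's figures, not a quoted result.

```python
# Illustrative compounding of the reported per-ml odds ratio.
or_per_ml = 1.18
extra_ml = 9.7 - 4.6        # shift in median volume between periods
print(or_per_ml ** extra_ml)   # ~2.33-fold increase in odds
```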
The principal weakness of the study is the historical nature of the comparison. We are unable to fully exclude the possibility of temporal shifts in the pattern of pneumococcal disease and other significant isolates, or of changes in the pattern of health-seeking behaviour and population. Pneumococcal disease does show seasonal variation [3,15]. By assessing blood volumes over similar seasons in the two assessment years, we have minimised the possible confounding effect of seasonal changes on the relationship between sample volume and retrieval of S. pneumoniae. Longer-term changes, i.e. the possibility of a steadily increasing burden of pneumococcal infection in the community, are less easy to control for. Accessibility of antiretrovirals (ART) for the HIV-infected and improvements in health care access may have altered health-seeking patterns. There is no information from this region to know if this is the case, although it is unlikely that this would easily explain the rise in pneumococcal isolation; indeed, access to ART might be expected to reduce disease burden [16], and the number of health service facilities in the region has remained the same over the period of the study. Population size and structure within the hospital catchment area are also subject to demographic change. [Table 2 note: For clarity, Gram-negative and Gram-positive bacteria other than S. pneumoniae and the non-typhi Salmonella have been grouped together. Odds ratios are presented unadjusted (OR) and adjusted (ORa) for year of sampling, and represent the odds of recovering an isolate with each extra ml of blood over the range of volumes taken in this study, 1-13 mls.] However, during the period of the study there have been no major population shifts, with annual growth estimated at 2.3% [17]. Urbanisation in Malawi is relatively slow compared to other African countries, and the prevalence of adult HIV in this region has remained stable at around 12%, which suggests there has been no large influx of susceptible individuals as an explanation for the increased numbers of cases. There is no evidence to support a fundamental change in the behaviour and biology of S. pneumoniae. Serotype distribution has remained relatively constant over several years [18], with an absence of any large epidemics of serotypes 1, 3, 5 or 12, which have been associated with large outbreaks in sub-Saharan Africa and elsewhere. Furthermore, the demonstrated association between recovery, blood volume and reduced contamination supports the view that the increased recovery is a function of sampling and not an epidemic phenomenon. The decrease in recovery of non-typhoidal Salmonella is less easy to explain. Salmonellae are potential skin contaminants in our setting as well as invasive pathogens in their own right. Reduced skin contamination through improved skin cleansing is probably the explanation for the reduced recovery of Staphylococcus aureus and may also be the case for the reduced recovery of other Gram-negatives. However, temporal shifts in disease burden need to be considered [19]. In the two years preceding this report (2001 and 2002), recovery rates of non-typhoidal Salmonellae and S. pneumoniae were similar to those recorded in 2003 (Salmonellae 8.0% and 10.1%, S.
pneumoniae 2.2% and 2.5% respectively; unpublished data, blood culture analysis using a manual method), which suggests that temporal changes take place gradually over long periods of time and that the findings reported in 2003 and 2004 are not unique in terms of recovery rates of significant isolates. Increasing use of antiretroviral drugs during 2005 and 2006 may have reduced the incidence of NTS bacteraemia in the adult population in Blantyre. Longer-term surveillance incorporating other centres will be important for understanding the evolving epidemiology of BSI, preferably nested in settings subject to demographic surveillance. The association between increased blood volume and improved recovery of pneumococcus is not in itself surprising. Past studies and accepted knowledge support the view that greater volumes of blood improve the recovery of microorganisms [8][9][10][11][12][13][20], although there are exceptions to this [21]. Similarly, it is not surprising that there should be a different relationship between volume and isolation rate for different bacteria. The quantity of bacteria in a given quantity of blood, the viability of these bacteria and their robustness during venesection, inoculation, transportation, incubation and analysis will all play a part. The failure to see increased recovery of non-typhoidal Salmonella with larger blood culture volumes may reflect the ease of recovery of this organism from individuals with advanced HIV, who are unable to control the bacteraemia [22]. What is surprising from this study is the extent of the change in recovery of pneumococci, relative to the other principal blood culture isolates, over the range of blood volumes suitable for inoculation into a single blood culture bottle. Single-bottle inoculation has been adopted as routine as it minimises costs and also fits with local cultural concerns over the removal of blood. This will be typical of many resource-limited settings. Our findings are very different from a previous report from the USA that recorded no change in the recovery of Gram-positive organisms using the BacTalert® system when comparing 5 and 10 ml inocula [23]. This suggests a fundamentally different host response to bacteraemia in our patient population, which alters bacterial recovery; the high prevalence of HIV co-infection, present in over 70% of adult inpatients, is likely to be the major determining factor. Pneumococcal bacteraemia is known to be more common during respiratory infection in HIV-infected adults compared to the HIV-uninfected [24,25]. A mechanism to explain a differing volume/recovery relationship in the presence of HIV is, however, not currently known. Conclusion These findings suggest we should be cautious about the way we interpret and use data from routine blood culture services where there are severe resource limitations. Uncritical reporting may falsely skew and under-report blood stream infections, especially S. pneumoniae bacteraemia. There are rapidly evolving initiatives to introduce pneumococcal conjugate vaccines into the vaccination schemes of developing nations. An important component of this introduction will be the availability of reliable pneumococcal disease surveillance for the monitoring of both direct and indirect benefits, and of serotype changes induced by the vaccine. These findings suggest that to achieve accurate results, staff should be educated but, most importantly, monitored and encouraged through an appropriate line of management.
We believe that the failure to see improvements in blood culture performance during 2004 reflects the low priority given by staff to the performance of blood cultures, the difficulties in finding resources, and the lack of encouragement and feedback to first-contact clinical staff from their immediate line managers. By having a small team of nurses, closely monitored, in daily communication with the laboratory and the nurse manager, and working towards targets, performance was improved. The resources needed to undertake the work were also more easily controlled and always available. We conclude by suggesting that a minimum set of standards required for adequate surveillance should be established. These should incorporate an audit of blood culture quality and blood volume collection, and a clear line of responsibility for supervising blood culture collection suitable for local circumstances, in order to enhance the accuracy and reliability of the collected data.
4,596.2
2008-10-14T00:00:00.000
[ "Medicine", "Biology" ]
Glaucoma through Animal's Eyes: Insights from the Evolution of Intraocular Pressure in Mammals and Birds Simple Summary Understanding how a disease evolved across the animal kingdom could help us better understand the disease and might lead to novel methods for treatment. Here, we studied the evolution of glaucoma, an irreversible eye disease, in mammals and birds, by studying the evolution of intraocular pressure (IOP), a central driver of glaucoma, and searching for associations between life history traits and IOP. Our results revealed that IOP is a taxon-specific trait that is higher in some species than in others. Higher IOPs appear to have evolved multiple times in mammals and birds. Higher IOPs were found in mammals with higher body mass and in aquatic birds. We also found that IOP evolved through stabilizing selection, with the optimum IOP in mammals and birds being 17.67 and 14.31 mmHg, respectively. This supports the hypothesis that higher IOPs may be an adaptive trait for certain animals. Focusing on species with higher IOPs but no evidence of glaucoma may help identify glaucoma-resistant adaptations, which could be developed into human therapies. Abstract Glaucoma, an eye disorder caused by elevated intraocular pressure (IOP), is the leading cause of irreversible blindness in humans. Understanding how IOP levels have evolved across animal species could shed light on the nature of human vulnerability to glaucoma. Here, we studied the evolution of IOP in mammals and birds and explored its life history correlates. We conducted a systematic review to create a dataset of species-specific IOP levels, and reconstructed the ancestral states of IOP using three models of evolution (Brownian, Early burst, and Ornstein-Uhlenbeck (OU)) to understand the evolution of glaucoma. Furthermore, we tested the association between life history traits (e.g., body mass, blood pressure, diet, longevity, and habitat) and IOP using phylogenetic generalized least squares (PGLS). IOP in mammals and birds evolved under the OU model, suggesting stabilizing selection toward an optimal value. Larger mammals and aquatic birds had higher IOPs; no other measured life history traits, the type of tonometer used, or whether the animal was sedated when measuring IOP explained the significant variation in IOP in this dataset. Elevated IOP, which could result from physiological and anatomical processes, evolved multiple times in mammals and birds. However, we do not understand how species with high IOP avoid glaucoma. While we found very few associations between life history traits and IOP, we suggest that more detailed studies may help identify mechanisms by which IOP is decoupled from glaucoma. Importantly, species with higher IOPs (cetaceans, pinnipeds, and rhinoceroses) could be good model systems for studying glaucoma-resistant adaptations. Introduction Glaucoma, an eye disease associated with elevated intraocular pressure (IOP), is a leading cause of irreversible visual loss in humans. Elevated IOP is a central driver of the degeneration of the optic nerve that results in gradual visual impairment. To date, no treatment can completely reverse the damage to the optic nerve, making glaucoma the second leading cause of blindness overall (after cataracts) and the leading cause of irreversible blindness in humans [1,2]. While it is a common visual pathology in our species, glaucoma is not uniquely human.
Many animals, especially domestic pets (e.g., cats and dogs) and livestock (e.g., cattle), have been reported to have glaucoma [3,4]. However, because of the limitations of technology and the feasibility of diagnosing glaucoma in different species, the prevalence of this disease in other animals, especially wildlife, is still undocumented [5]. Understanding how glaucoma evolved across the animal kingdom might shed light on susceptibility and adaptation to the disease [6,7]. To investigate the presence of glaucoma in animals, we must first find ways to identify the pathology in nonhuman animals. In humans, diagnosing glaucoma requires measuring IOP, conducting a visual field test, and conducting an optical coherence tomography (OCT) test [8]. However, conducting a similar set of assessments among a wide range of nonhuman animals in varied settings is logistically impossible [5]. For captive animals, veterinarians use a tonometer during a screening test to measure IOP [9]. This helps determine the risk of having glaucoma. Because of the limited data on reports of glaucoma in non-humans, we used IOP as a proxy for the chance of having glaucoma in non-human animals. IOP in humans is associated with many factors, such as blood pressure, age, diet, and genetics [10][11][12]. These traits vary across the animal kingdom. For example, giraffes (Giraffa camelopardalis) have a systolic blood pressure that can exceed 280 mmHg. At the other end of the spectrum, normal systolic blood pressure in modern reptiles ranges from 40-60 mmHg. Human systolic blood pressure, by way of contrast, is 120 mmHg [13,14]. The average lifespan of bowhead whales (Balaena mysticetus) is 211 years, while humans' average lifespan is 72 years, and small rodents, such as Chinese hamsters (Cricetulus griseus), live 2-3 years [15,16]. The variation in these traits and their correlations with IOP have rarely been investigated in non-humans. A comparative study of the association between these traits and IOP will allow us to better understand how animals may have adapted to survive under relatively high IOP levels. Research suggests that extreme recreational activities such as SCUBA diving and bungee jumping increase IOP and should be avoided by patients with glaucoma [17]. However, many non-humans engage in similarly intense activities as part of their daily lives. For instance, pinnipeds may engage in extremely deep foraging dives. Some pinniped species, such as elephant seals (Mirounga angustirostris), can dive to more than 1500 m, which is almost 50 times deeper than humans can dive without specialized equipment [18]. Such diving is similar to SCUBA diving in humans, and a recent study suggests that some species of pinnipeds have glaucoma [19]. Some terrestrial birds, such as Rüppell's vultures (Gyps rueppellii), fly up to 11,300 m and must deal with substantially different air pressures on such flights [20]. Similarly to bungee jumping, how can birds deal with this rapid change in air pressure, and does the abrupt change in air pressure affect their IOP? These questions could be addressed if we first understand the pattern and evolution of IOP in animals. We conducted a comparative phylogenetic study to understand the evolution of IOP in mammals and birds. First, we mapped IOP traits on mammal and bird phylogenies and conducted ancestral state reconstruction to understand the evolutionary pattern of IOP.
Second, we studied the correlation between life history traits and IOP. Due to the tight linkage between IOP and glaucoma in humans, the results of this exploratory research could improve our understanding of the susceptibility to, and protection from, glaucoma in animals. Comparative IOP Dataset We used the results of a previous systematic review of IOP and supplemented it with new searches. The initial IOP dataset in birds and mammals [21] contained IOP data found from searches of the PubMed, Scopus, and BioOne databases, from their inception to 31 August 2015. We used the same search terms and inclusion and exclusion criteria as the original database and searched the Web of Science (all databases) from August 2015 to February 2020. Phylogenetic Tree Preparation We obtained phylogenetic trees of mammals and birds from http://vertlife.org/ (accessed on 27 February 2021). The VertLife database has two types of phylogenetic trees, a "complete" tree and a "sequenced species only" tree. The two types of trees provide different evolutionary relationships and tree topologies [22,23]. Therefore, we combined both types of trees to generate a maximum consensus tree. For mammalian trees, we combined 1000 mammal birth-death node-dated completed trees (5911 species) with 1000 mammal birth-death node-dated DNA-only trees (4098 species). For bird trees, we combined 1000 trees of "all species birds" with 1000 trees of "sequence species". After concatenating both datasets for each of the mammal and bird trees, we generated a maximum clade credibility (MCC) tree using the function maxCladeCred in the phangorn R package (version 2.7.0) (R Core Team, Vienna, Austria) [24]. These mammalian and avian trees were used throughout the following analysis. Studying the Evolution of IOP To understand the evolution of IOP, we mapped the IOP data and reconstructed the ancestral state of IOP on the mammal and bird MCC trees using a maximum-likelihood approach. As IOP is a continuous trait, we fitted three models of continuous trait evolution prior to conducting an ancestral state reconstruction (ASR). The three models were: (1) a Brownian model, which explains trait evolution by random drift or multiple selective pressures [25]; (2) an early-burst model, which infers adaptive radiation through rapid change in traits early in time that then slows as time progresses [26]; and (3) an Ornstein-Uhlenbeck (OU) model, which applies when there is stabilizing selection around an optimal value [27,28]. We conducted model fitting using the function fitContinuous in the geiger R package (version 2.0.7) and compared the Akaike information criterion (AIC) scores of these three models. We used the lowest AIC score to select which of the three models best fit the trait variation. Then, we simulated how IOP evolved, with model parameters obtained from model fitting (the σ² parameter for Brownian models and the α and θ parameters for OU models), for 100 generations in R, to visualize constraints on trait evolution. Finally, we mapped the IOP data and implemented ancestral state reconstruction using the functions fastBM and contMap in the phytools R package (version 0.7-70) [29]. Studying Life History Correlates of IOP We explored life history traits that have been reported to be associated with an increase in IOP. Many studies have revealed that body mass is slightly associated with IOP in both nonhumans and humans [21,30].
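The simulation step described above (Brownian versus OU dynamics) can be illustrated with a minimal discretized simulation. The sketch below uses Python rather than the R workflow of the paper; θ is set to the fitted mammalian optimum of 17.67 mmHg, while σ and α are illustrative values, not the fitted parameters. It shows the key qualitative difference: Brownian lineages spread without bound, whereas OU lineages are pulled back toward θ and cluster, the signature of stabilizing selection.

```python
# Minimal Brownian vs Ornstein-Uhlenbeck trait simulation (illustrative).
import numpy as np

rng = np.random.default_rng(1)
steps, lineages, dt = 1000, 100, 0.01
sigma, alpha, theta = 3.0, 2.0, 17.67    # sigma, alpha assumed; theta from the text

bm = np.full(lineages, theta)            # Brownian lineages
ou = bm.copy()                           # OU lineages
for _ in range(steps):
    noise = rng.normal(0.0, sigma * np.sqrt(dt), size=lineages)
    bm = bm + noise
    ou = ou + alpha * (theta - ou) * dt + noise   # mean reversion toward theta

print("BM spread:", bm.std())   # grows with elapsed time
print("OU spread:", ou.std())   # stabilises near sigma / sqrt(2 * alpha)
```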
Therefore, we included average body mass (kg) in this study and hypothesized that body mass might be positively associated with IOP levels across mammals and birds. There is some discussion about whether SCUBA diving is associated with increased IOP in humans [17,31,32]. Do diving animals also experience high IOP, as SCUBA divers do? This question has never been addressed. We therefore included the habitat (aquatic/terrestrial) that animals inhabit and the maximum diving depth (meters) for diving species in this study. We postulated that aquatic mammals and birds might have higher IOPs than terrestrial species. Blood pressure correlates with IOP in humans [33]. Patients with hypertension often have high IOP and a concomitant increased risk of glaucoma [34]. Moreover, hypertension is also associated with age, and older adults tend to have a higher risk of hypertension and high IOP [35]. Hence, we included systolic blood pressure (mmHg), the pressure exerted when the heart beats, and maximum longevity (years) from captive animals as life history traits. Some complementary and alternative medicine studies have identified a link between diet and IOP levels in humans who mainly consume fruits and vegetables [12,36]. Given the variety of diets in non-humans, we included diet type (herbivore, carnivore, and omnivore) as another life history trait. We obtained the life history trait information for the species for which we had IOP information from various databases. For average body mass (kg), habitat (aquatic, terrestrial), and diet type (herbivore, carnivore, omnivore), we used the information from https://animaldiversity.org (accessed on 1 March 2021). For maximum longevity (years), we obtained the information from the AnAge database (https://genomics.senescence.info/species/index.html) (accessed on 1 June 2022), which reports maximum captive longevity. For average blood pressure (mmHg) and maximum diving depth (meters), we obtained the information from literature searches through Google Scholar and the Web of Science. Since there are few data on avian blood pressure, we omitted the analysis of birds' blood pressure. In addition, the data on average body mass and maximum longevity were not normally distributed. We therefore transformed these data using a log10 transform prior to analysis. As species share common ancestors, and as they might not have evolved independently [37], we then tested for a phylogenetic signal, a proxy for the statistical dependence among species' traits due to their phylogenetic relationships [38]. To do so, we fitted each life history trait with four different modes of evolution: (1) Brownian motion, (2) Ornstein-Uhlenbeck (OU), (3) Pagel's lambda, and (4) white noise (a non-phylogenetic model). We used the function gls in the nlme package in R, with the functions corBrownian, corMartins, and corPagel as the correlation structures for the first three models, respectively. For the white noise model, we used only the function gls. We then compared the AIC scores of the models and interpreted the model with the lowest AIC. The type of tonometer (applanation or rebound) and the use of sedation during IOP measurement influence IOP [21,39]. However, our IOP data were obtained with various types of tonometer, and some of the IOPs were measured while the animals were sedated. To control for the effect of these two factors in our study, we used phylogenetic generalized least squares (PGLS) to investigate the effect of tonometer and sedation on the IOPs in our dataset.
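The model-selection step described above reduces to computing AIC = 2k − 2 ln L for each candidate model of phylogenetic signal and keeping the lowest. The helper below illustrates the bookkeeping with hypothetical log-likelihoods; the actual values come from the R gls fits in the paper's workflow.

```python
# Tiny AIC-comparison helper (log-likelihoods here are hypothetical).
models = {  # name: (n_parameters, log_likelihood)
    "Brownian":     (2, -152.3),
    "OU":           (3, -148.9),
    "Pagel_lambda": (3, -149.5),
    "white_noise":  (2, -160.1),
}
aic = {name: 2 * k - 2 * ll for name, (k, ll) in models.items()}
best = min(aic, key=aic.get)
print(aic, "->", best)   # the OU model has the lowest AIC in this toy example
```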
To investigate the relationship between life history traits and IOP in mammals, we fitted a PGLS model with log10(average body mass), because a preliminary analysis showed that IOP was correlated with log10(average body mass). We then added each of the following variables to the log10(average body mass)-only model: (1) habitat (aquatic, terrestrial), (2) diet type (herbivore, carnivore, omnivore), (3) average blood pressure, (4) log10(maximum longevity), and (5) maximum diving depth. In birds, log10(average body mass) did not correlate with IOP, so we fitted three PGLS models instead. The first model had three independent variables, log10(average body mass), habitat, and diet type, for which data were available for all species. Because the remaining traits were not available for all species, we fitted two additional models using all available data: (1) log10(maximum longevity) and (2) maximum diving depth.

Evolution of IOP

We studied IOP evolution using 63 species (28 families, 10 orders) of mammals and 43 species (13 families, 11 orders) of birds, and found that IOP levels in both clades evolved under the Ornstein-Uhlenbeck model (Table 1). This means that, rather than drifting randomly, IOP evolved towards an optimum value. Based on the OU model fits, the optimum values (θ) of IOP in mammals and birds were 17.67 and 14.31 mmHg, respectively. To visualize this, Figures 1B and 2B show simulations of IOP evolution under the Brownian model, and Figures 1C and 2C under the OU model. In the OU simulations, present-day trait values were more tightly clustered than in the Brownian simulations, indicating stabilizing selection. In humans, an IOP over 22 mmHg defines ocular hypertension [40]. In mammals, the ancestral state reconstructions suggested that higher IOPs evolved independently at least six times (Figure 1A): twice in the order Artiodactyla (buffalos, and dolphins and porpoises), once in the order Perissodactyla (rhinoceroses and horses), twice in the order Carnivora (pinnipeds, and lions and cheetahs), and once in the order Diprotodontia (koalas). While IOPs below 6.5 mmHg are considered pathologically low in humans, a condition known as hypotony [41], we found that lower IOPs evolved independently at least twice in the order Rodentia (lowland pacas and hamsters). In birds, the ancestral state reconstruction suggested that birds initially had a low IOP and that higher IOPs evolved independently at least twice (Figure 2A), once in the order Sphenisciformes (penguins) and once in the family Accipitridae (eagles).
Life History Correlates of IOP

Even though much research reports that the type of tonometer and sedation influence IOP levels, our PGLS analysis showed that, in our dataset, neither the type of tonometer (p = 0.219) nor sedation (p = 0.129) explained the variation in IOP in mammals (Table 2). Similarly, variation in IOP in our avian dataset was not explained by the type of tonometer (p = 0.891) (Table 3). In addition, our results revealed that IOP increased with body mass in mammals (p < 0.01) (Table 2); this association was absent in birds (p = 0.678) (Table 3). Even though non-PGLS results in mammals showed that habitat (p < 0.01) and diet type (p < 0.01) were associated with IOP, our best-supported models were those that incorporated phylogenetic information (Supplementary Table S1). Under PGLS, we found no association between these traits (including average blood pressure) and IOP (Table 2). In birds, only the first model did not require phylogenetic information, and in this model we found an association between habitat and IOP (p = 0.01): aquatic birds that dive, such as penguins, have higher IOPs than non-diving birds (Table 3). We found no relationship between log10(average body mass) (p = 0.678) or diet type (p = 0.806 and p = 0.537) and IOP. The other two best-supported models were those with phylogenetic information, and in these models we found no relationship between IOP and log10(maximum longevity) (p = 0.708) or maximum diving depth (p = 0.743) (Table 3).

Discussion and Conclusions

Our macroevolutionary study revealed that IOP, a trait that results from a combination of anatomical and physiological processes, evolved under the OU model in both the mammalian and avian clades. This indicates that IOP may be under stabilizing selection. Based on the available data, the optimal IOP values in mammals and birds are 17.67 and 14.31 mmHg, respectively. These optima fall within the range of healthy IOP values in humans [40] and some non-human species [42]. The interspecific variation in IOP levels is notable, because the IOP levels of some species would be sufficient to induce pathology in humans. This suggests that some species may have evolved adaptations that confer resistance to glaucoma. Zouache et al. found that IOP levels increased with the evolution of terrestrial animals [21], which may reflect the transition from lens-based optics in aquatic animals to cornea-based optics in terrestrial animals. This raises an additional question: what are the IOP levels of marine mammals, whose terrestrial ancestors "returned" to the water over 30 million years ago? Even though we found no association between IOP and habitat in mammals, our comparative study revealed that most marine mammals have higher IOPs (25-32.8 mmHg) than their ancestors (Figure 1A). This is similar to seabirds, whose IOPs range from 20.36 to 28.18 mmHg (Figure 2A). Nevertheless, the IOP data we have were measured while the animals were on land. Research on cetaceans found that when whales dive, they can reduce their heart rate (bradycardia) to 4-8 beats per minute (bpm) and increase it to 40 bpm at the surface (tachycardia) [43].
Emperor penguins show a similar diving response, in which the heart rate falls from 200-240 bpm at the sea surface to 5-10 bpm at the bottom of the ocean [44]. Since blood pressure is associated with elevated IOP in humans, such cardiovascular adjustments might be one mechanism by which marine mammals and seabirds reduce IOP while diving under high ambient pressure. However, more research, and specifically measurements of IOP conducted under water, is needed to evaluate this hypothesis. Our research has a number of limitations. First, there are limited IOP data available for mammals and birds: our data represent only 3% of mammalian species and less than 1% of avian species. This small dataset might affect the statistical analyses and may lead to an incomplete interpretation. Moreover, IOP fluctuates within individuals, influenced by factors such as volume status and developmental phase; consequently, an analysis of mean IOP values cannot capture the full picture of IOP. Second, because health status was not always available, our inclusion criteria [21] may have admitted IOP values from individuals with underlying ocular or other pathologies. Our analyses therefore rest on the assumption that most measured IOPs represent a "normal" rather than a pathological state. A third concern is inconsistency in the tonometers used to measure IOP. There are two main types of tonometry, rebound and applanation, and the two methods may yield different measurements [21], which might inflate the variance of the IOP estimates. Even though the type of tonometer did not explain variation in our dataset, future comparative studies across multiple species should be aware of this issue. Finally, corneal stiffness (e.g., central corneal thickness) affects tonometric IOP readings [45]: with the same tonometer, thinner corneas give lower IOPs than thicker corneas [46]. Our study could not control for this factor because corneal stiffness information is lacking for non-human animals. It is therefore essential to collect corneal stiffness information alongside IOP data in future studies, especially in wildlife. Since this was exploratory research, we suggest that more IOP data from other non-humans are needed to better understand the evolution of IOP and glaucoma in vertebrates. The results of our study characterize IOP as a species-specific trait and describe its evolutionary pattern in mammals and birds. Future studies are needed to investigate how varying selective pressures shaped the morphological and physiological mechanisms that underlie IOP. In addition, the Krogh principle states that "for a large number of problems, there will be some animal of choice, or a few such animals, on which it can be most conveniently studied" [47]. We suggest that identifying species with higher IOPs could be a source of insights for developing novel approaches to glaucoma treatment and prevention. How, for example, do animals with IOPs outside the optimum range, including cetaceans and rhinoceroses, remain glaucoma-free? Awareness of species with normal visual function despite higher IOPs may spark interest in understanding the mechanisms that appear to confer this resistance.
Moreover, since our research provides information regarding the macroevolution of IOP, future research could include a comparative genomic study of genes that might lead to elevated IOP or cause glaucoma across vertebrates. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ani12162027/s1, Table S1: Comparison of the phylogenetic signal of each model in mammals and birds using four modes of evolution (Brownian, Pagel's lambda, early-burst, and Ornstein-Uhlenbeck) and a non-PGLS model. Bold denotes the models with the lowest AIC value. Table S2
Analysis of Space-Based Observed Infrared Characteristics of Aircraft in the Air: Space-based infrared observation of aircraft in the air has the advantages of wide-area, full-time, and passive detection. The optical design parameters of space-based infrared sensors strongly depend on the target's observed radiation, but there is still a lack of insight into the causes of aircraft observation properties and the impact of instrument performance. To address this, a simulation model of space-based observed aircraft infrared characteristics was constructed, coupling the aircraft radiance with background radiance and instrument performance effects. It was validated by comparing the model predictions with data from both space-based and ground-based measurements. The validation results reveal the alignment between measurements and model predictions and the dependence of overall model accuracy on the background. Based on simulations, the radiance contributions of aircraft and background are quantitatively evaluated, and the detection spectral window for flying aircraft and its causes are discussed in association with instrument performance effects. The analysis indicates that the target-background (T-B) contrast is higher in the spectral ranges where aircraft radiation makes an important contribution. The background radiance plays a significant role overall, while the observed radiance at 2.5-3 µm comes mainly from skin reflection and plume radiance. Omitting skin-reflected radiation degrades model reliability, and its reduction at nighttime lowers the T-B contrast. The difference in T-B self-radiation and the stronger atmospheric attenuation of the background contribute to the higher contrast at 2.7 µm compared with other spectral bands.

Introduction

The airplane's invention revolutionized how humans travel, connecting faraway places worldwide. The state of aircraft during navigation has received extensive attention [1], both as a need for economic development and as a focus of national defense construction. In today's complex aviation environment, no single technology can yet track all aircraft types with global coverage [2]. Space-based infrared imaging enables the acquisition of aircraft location information on a global scale without time constraints, yet there is currently no on-orbit infrared instrument designed for detecting flying aircraft. The optical design parameters of space-based infrared sensors and the development of detection algorithms strongly rely on the target's observed radiation. However, there is still a lack of insight into the causes of aircraft observation properties and the effects of instrument performance. Measurements can provide real infrared data of aircraft under certain conditions, but their cost is very high. Accordingly, infrared target simulation modeling has long been a research focus in the military field [3][4][5][6][7]. Battlefield requirements change with the development of technology. Mahulikar et al.
[8][9][10] discussed the relationship between infrared radiation levels and lock-on distance in infrared modeling of military aircraft and proposed the concept of the infrared cross-section. Coiro [11][12][13] conducted infrared simulation modeling and sensitivity analyses of civil airplanes and unmanned combat air vehicles. These methods are mainly intended for air-to-air or ground-to-air detection scenarios and need to be refined for aircraft detectability assessments under space-based infrared observation. Yuan [14] proposed a multispectral integrated model of sea/cloud background radiation characteristics and analyzed the detection performance for aircraft plumes. Zhu [15] established an all-attitude motion characterization and parameter analysis system for aerial targets. These simulation models do not comprehensively account for the aircraft's observed radiation and lack preliminary validation under space-based infrared observation; parts of the radiation, such as skin or background radiation, were ignored. Accordingly, it is essential to establish and initially validate a space-based observed aircraft infrared characteristics model that couples target radiation with background radiation and instrument performance effects. In the application analysis of simulation models, Mahulikar et al. [16][17][18][19][20] analyzed the influence of internal/external radiation sources, nozzle area, and skin emissivity on detection distance and optimized the skin emissivity. The sources of aircraft infrared radiation in the wide bands (3-5 µm and 8-14 µm) have been studied [21]. The concept of modulating an aircraft's infrared characteristics based on anisotropic skin emissive characteristics was proposed and verified by simulation [22]. Yuan [23] used the signal-to-noise ratio (SNR) and detection distance to analyze the detection capability of a geostationary infrared imaging system for aircraft plumes. Zhu and Yu et al. [24,25] used the comprehensive signal-to-noise ratio (CSNR) to evaluate the detection performance of infrared systems for the optimization of key parameters. In this literature, metrics such as SNR and CSNR were used to evaluate the detectability of infrared detection systems, but they cannot directly separate the influence of the target's infrared radiation characteristics, the background, and the system performance [26]. Moreover, there is still a lack of quantitative evaluation of the contributions of aircraft skin emission, reflected radiation, plume radiation, and background radiation to space-based observed radiation. This hinders insight into the space-based infrared observational properties of aircraft and the reasons for their formation.
The ground sampling distance (GSD), the modulation transfer function (MTF), the spectral response function (SRF), and the noise equivalent temperature difference (NE∆T) are important performance parameters of infrared systems and are widely known in data applications. In the prior literature [23,24,27], the GSD, detector pixel size, and spectral bands of space-based infrared detection systems were studied and analyzed. It was agreed that narrow bands outperform wide bands [14], with spectral bands such as 2.65-2.9 µm and 4.25-4.5 µm considered spectral detection windows for aerial targets. However, these studies focused more on the results or phenomena of optimization than on the causes of the characteristic spectral bands and the effects of instrument performance on the relative difference between target and background. In view of the above, this paper builds a simulation model of space-based observed aircraft infrared characteristics and uses B7-12 data of the Gaofen-5 (GF-5) visual and infrared multispectral sensor (VIMS) [28] and ground-measured plume data [6] for preliminary validation. A radiative contribution evaluation was carried out to quantify the contributions of background radiation, skin emission radiation, skin reflection radiation, and plume radiation to the body-leaving and at-sensor radiance, and to analyze the effects of diurnal variation and spatial resolution on these contributions. The effect of instrument performance on the T-B contrast was then discussed, and the formation of characteristic spectral bands was analyzed in conjunction with the radiance contribution evaluation and atmospheric attenuation effects. Section 2 describes the simulation methodology, the workflow, and the validation and analysis cases. In Section 3, simulated results are compared with space-based and ground-based data for validation; the contributions of aircraft and background radiation to the body-leaving and at-sensor radiance, and the effect of instrument performance on the T-B contrast, are then assessed. Lastly, the discussion and conclusions are provided in Sections 4 and 5, respectively.

Observed Radiance of Aircraft by a Space-Based Sensor

Within the instrument's linear response range, the measured radiance (also known as the restored at-sensor radiance, or restored onboard radiance for satellite remote sensing) can be modeled as the result of convolving the true radiance with an abstract sensor equivalent response function and adding a random noise process, as shown in Formula (1) [29].
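Reconstructed from the description above and the symbol definitions that follow, Formula (1) plausibly reads

L_{\mathrm{res\_TOA}} = R_{\mathrm{sensor}} \otimes L_{\mathrm{TOA}} + L_{\mathrm{noise}},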
where L_res_TOA and L_TOA are the restored at-sensor radiance and the true top-of-atmosphere (TOA) radiance, respectively; R_sensor is the sensor equivalent response, including the spatial imaging degradation and the spectral response; and L_noise is the effective instrument noise radiance. Due to the long observation distance, the solid angle of the aircraft is usually smaller than the instantaneous field of view, so the aircraft is a sub-pixel target in a space-based imaging system [14,27]. The aircraft radiation signal observed by the space-based sensor therefore includes both aircraft and background radiation. The TOA radiance, including the target and background signals, can be expressed as in Formula (2). It is generally believed that the infrared radiation of aircraft comes mainly from skin and plume emission [27]; in addition, the radiance reflected by the aircraft skin is also taken into account in this paper. The plume is a block of high-temperature gas, and the true observed radiation is the coupling of the gas emission radiance with the background or nozzle radiance, as shown in Formula (3), where S_Skin, S_Plume and S_NP are the projected areas of the skin, the plume and the nozzle in the observation direction, respectively, assuming that S_Tar = S_Skin + S_Plume; L_E_skin and L_R_skin are the emitted and reflected radiance of the aircraft skin, respectively; L_Plume, L_E_plume, L_Nozzle and L_Bkg^h are the plume equivalent radiance, plume gas emission radiance, nozzle emission radiance, and background radiance at the flight altitude, respectively; and τ_Plume is the abstract transmittance of the plume gas. The restored onboard radiance of the aircraft under space-based infrared observation can be obtained by substituting Formulas (2) and (3) into Formula (1), as shown in Formula (4). The TOA radiance signal can thus be decomposed into five parts: skin emission radiation, skin reflection radiation, plume radiation, background radiation, and atmospheric path radiation, as shown in Figure 1.
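Plausible forms of Formulas (2) and (3), reconstructed from the symbol definitions above and from the fragment of Formula (2) quoted in Section 2.3.3, are sketched below; the partition of the plume footprint between nozzle and background in the second relation is an assumption, not the authors' exact expression:

L_{\mathrm{TOA}} = \tau_{\mathrm{atm}}^{h \to toa} \, \frac{L_{\mathrm{Tar}} S_{\mathrm{Tar}} + L_{\mathrm{Bkg}}^{h} S_{\mathrm{Bkg}}}{d^{2}} + L_{\mathrm{path}}, \qquad L_{\mathrm{Tar}} S_{\mathrm{Tar}} = \left( L_{E\_skin} + L_{R\_skin} \right) S_{\mathrm{Skin}} + L_{\mathrm{Plume}} S_{\mathrm{Plume}}

L_{\mathrm{Plume}} S_{\mathrm{Plume}} = L_{E\_plume} S_{\mathrm{Plume}} + \tau_{\mathrm{Plume}} \left[ L_{\mathrm{Nozzle}} S_{\mathrm{NP}} + L_{\mathrm{Bkg}}^{h} \left( S_{\mathrm{Plume}} - S_{\mathrm{NP}} \right) \right]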
The Flow and Metrics of Analysis

Space-based observed aircraft infrared radiation is affected by aircraft radiation, background radiation, atmospheric effects, and instrument performance. To further investigate its causes, the various impact factors should be assessed quantitatively. As shown in Figure 2, the research framework of this paper is as follows. First, a model of aircraft space-based infrared observation radiation was developed, coupling aircraft radiation, background radiation, atmospheric effects, and instrument performance characteristics; preliminary validations were carried out using space-based data and static plume measurement data. Then, based on the simulation model, the contributions of the skin emitted/reflected radiation, plume radiation, background radiation, and path radiation were calculated and evaluated for the aircraft body-leaving radiance and the at-sensor radiance. Finally, the effects of GSD, MTF, SRF, and noise on the T-B contrast were analyzed with regard to the current level of instrument performance.
The simulation results were compared with the restored onboard radiance [29], and the absolute error (AE) and relative error (RE) were adopted to evaluate the simulation accuracy, as shown in Formulas (5) and (6) [29]. The T-B contrast was adopted to assess the relative difference and relationship between the aircraft observed radiance and the background radiance; it can be calculated by Formula (7) [30], and a positive value means that the aircraft is brighter than the background. Another metric used to assess the model's reliability is the consistency of the T-B light-dark relationship between measurements and predictions. The absolute mean values of these metrics were likewise calculated, as shown in Formula (8), where AE_i, RE_i, CR_i, L_SIMU,i and L_ONBOARD,i are the absolute error, relative error, T-B contrast, simulated radiance and restored onboard radiance of the i-th band, respectively; L_Bkg,i is the background radiance; L_TB,i is the radiance of the pixel containing both target and background; MRE, MAE and MCR are the mean relative error, mean absolute error and mean contrast ratio, respectively; and n is the number of bands. To validate the accuracy of the pure-aircraft (background-free) simulation and its influence on the reliability of the overall simulation model, an attempt was made to separate the influence of the background radiance. Assuming that the measured signal of the aircraft is a linear mixture of the pure target signal and the background signal at the pupil, it can be expressed as Formula (9), where ξ_TB is the aircraft observed signal, including both aircraft and background signals; ξ_Toa_Bkg and ξ_Toa_Tar are the pure background and aircraft signals at the TOA, respectively; and F is the aircraft signal factor, describing the contribution ratio of the target signal to the observed signal. The measured background radiance and the aircraft projected-area ratio S_Tar/d² can be used to estimate ξ_Toa_Bkg and F for the calculation of ξ_Toa_Tar, which can then be compared with the simulation results of the aircraft without the background.
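Based on the definitions above, Formulas (5)-(9) plausibly take the following forms; the (1 − F) weighting of the background term in the last relation is an assumption consistent with F being the target contribution ratio:

AE_i = L_{\mathrm{SIMU},i} - L_{\mathrm{ONBOARD},i}, \qquad RE_i = \frac{L_{\mathrm{SIMU},i} - L_{\mathrm{ONBOARD},i}}{L_{\mathrm{ONBOARD},i}} \times 100\%

CR_i = \frac{L_{TB,i} - L_{Bkg,i}}{L_{Bkg,i}}, \qquad MAE = \frac{1}{n}\sum_{i=1}^{n}\left|AE_i\right|, \quad MRE = \frac{1}{n}\sum_{i=1}^{n}\left|RE_i\right|, \quad MCR = \frac{1}{n}\sum_{i=1}^{n}\left|CR_i\right|

\xi_{TB} = F\,\xi^{\mathrm{Toa}}_{\mathrm{Tar}} + (1 - F)\,\xi^{\mathrm{Toa}}_{\mathrm{Bkg}}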
The spectral relative contributions (RC) of the aircraft skin emission radiance, reflected radiance, plume radiance, background radiance, and path radiance to the total body-leaving radiance and at-sensor radiance are calculated separately by Formula (10). In particular, the path radiance accounts for the path from the flight altitude to the top of the atmosphere, and thus mainly affects the at-sensor radiance. The analysis of each component's relative contribution provides further insight into the role of each radiation source in the different spectral bands and into the effect of atmospheric attenuation. Here, L_component denotes the radiance components, including the skin emission/reflection radiance, plume radiance, background radiance and path radiance (the latter only for the at-sensor radiance), observed in the body-leaving radiance and the at-sensor radiance; L_Tot is the total radiance, i.e., the total body-leaving or at-sensor radiance of the aircraft. These coefficients can be derived from Formulas (3) and (4). The spectral T-B contrast under different instrument performances was calculated to evaluate the effect of each performance parameter. Spectral bands with significant and prominent target-background contrast were selected for further analysis. Finally, in conjunction with the analysis of instrument performance and radiance contributions, the causes of contrast in the characteristic spectral bands were discussed, and the advantages and disadvantages of each band were compared.

Simulation Modeling

Skin Radiance

Aircraft skin radiance comprises skin emission and reflected radiance. The airframe is usually made of coated metal, whose emitted radiance can be calculated using Planck's formula, as shown in Formula (11), where ε is the emissivity of the skin; M_BB is the blackbody irradiance; T is the temperature; h is the Planck constant; c is the speed of light; λ is the wavelength; and k is the Boltzmann constant. During navigation, the aircraft skin temperature is mainly governed by atmospheric aerodynamic heating; the heating effect of solar radiation is smaller and can be neglected [17]. The stagnation temperature is used to estimate the skin temperature, as expressed in Formula (12) [30], where T_s is the skin temperature; T_0 is the temperature of the atmosphere around the aircraft; r is the recovery coefficient; γ is the specific heat ratio; and Ma is the flight Mach number. In space-based observation, the skin-reflected radiance mainly comprises the direct solar radiance and its scattered radiance, the cloud radiance, and the atmospheric thermal radiance. Defining the latter three as sky radiance, the skin body-leaving radiance can be expressed as in Formula (13), where ρ is the skin reflectivity, with ρ + ε = 1; L_sd and L↓_sky are the direct solar radiance and the sky downward radiance on the aircraft skin, respectively. Direct solar radiance and sky radiance can be estimated using the direct solar irradiance E_sd and the atmospheric downward diffuse irradiance E↓_sky at a specified horizontal altitude, which can be derived from the "flx" files of MODTRAN [31]. Due to the long distance of space-based observation, the aircraft shape can be simplified to calculate the projected area of the aircraft skin [15].
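With the symbols defined above, and assuming a Lambertian skin (as adopted in Section 2.4), Formulas (11)-(13) can plausibly be written as

M_{BB}(\lambda, T) = \frac{2\pi h c^{2}}{\lambda^{5}} \, \frac{1}{\exp\!\left[hc/(\lambda k T)\right] - 1}, \qquad L_{E\_skin} = \varepsilon \, \frac{M_{BB}(\lambda, T_{s})}{\pi}

T_{s} = T_{0}\left(1 + r\,\frac{\gamma - 1}{2}\,Ma^{2}\right)

L_{\mathrm{skin}} = L_{E\_skin} + \rho\left(L_{sd} + L^{\downarrow}_{sky}\right)

where the division by π in the emitted term converts blackbody irradiance to radiance under the Lambertian assumption.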
Plume Radiance

Aircraft nozzle radiance was treated as grey-body radiation with an emissivity of about 0.9 [10], which can be calculated using Planck's formula. The plume, a non-uniformly distributed high-temperature gas, differs from the grey-body radiation characteristics of a surface: its emission and absorption effects must both be considered, and the radiative transfer equation can be expressed as Formula (14) [32], where κ_a is the absorption coefficient; L is the local radiance; L_BB is the blackbody radiance; and s and s⃗ denote the position and the optical path vector, respectively. The line-of-sight (LOS) method [33] was applied in this study to solve the radiative transfer equation: the non-uniform gas along a LOS is uniformly divided into multiple layers, as shown in Figure 3. The plume gas radiation of a LOS can be expressed as Formula (15), the nozzle or background radiance coupled with the plume gas can be calculated by Formulas (16) and (17), respectively, and the plume radiation intensity in Formula (3) then follows as Formula (18). Here, L_BB^i is the blackbody radiance of the i-th slab; τ_i denotes the transmissivity of the i-th slab and (1 − τ_i) its emissivity; n represents the number of stratified layers; N is the total number of LOS intersecting the plume; M is the number of LOS intersecting both the nozzle and the plume; ∆d is the spatial sampling interval of the LOS; and the projected area of the plume in the observation direction can be estimated from N and ∆d. The plume fluid field calculation aims to obtain the gas temperature, the pressure distribution, and the species content. The computational methods fall into two categories. One is the simplified model that uses empirical or semi-empirical formulations to obtain the plume fluid field [34,35]. The other is computational fluid dynamics, which solves the Navier-Stokes equations to derive the plume fluid field [6,32,36]. In contrast to the simplified model, the latter yields a fine plume fluid field; however, it requires tedious geometric model construction, meshing, and other manual processes, which cost large amounts of computational resources and time. Hence, a simplified model [35] was adopted to calculate the fluid field distribution. The absorption coefficients of each species within the specified wavenumber (η) and temperature intervals can be calculated using the line-by-line method [37] with the aid of the high-temperature database HITEMP [38].
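Gathering the plume relations above, plausible reconstructions of Formulas (14), (15), (19) and (20) are sketched below; the layer ordering in the second relation and the use of the second radiation constant c₂ = hc/k in the last are assumptions consistent with standard line-by-line practice:

\frac{\mathrm{d}L(s)}{\mathrm{d}s} = \kappa_{a}(s)\left[L_{BB}(s) - L(s)\right]

L_{\mathrm{LOS}} = \sum_{i=1}^{n} L_{BB}^{i}\,(1 - \tau_{i}) \prod_{j=i+1}^{n} \tau_{j}

\kappa_{a}(\eta, T) = \sum_{i} S_{i}(\eta, T)\, F(\eta - \eta_{0i})

S_{i}(\eta, T) = S_{i}(\eta, T_{ref})\,\frac{Q(T_{ref})}{Q(T)}\,\frac{\exp(-c_{2} E / T)}{\exp(-c_{2} E / T_{ref})}\,\frac{1 - \exp(-c_{2}\eta_{0i}/T)}{1 - \exp(-c_{2}\eta_{0i}/T_{ref})}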
Here, S_i(η, T) is the line intensity of the i-th spectral line at a given wavenumber when the temperature is T; F(η − η_0i) is the line shape function of the i-th spectral line, usually the Voigt line function; and η_0i is the central wavenumber of the i-th spectral line. The spectral line intensity S_i(η, T) can be derived by extrapolation from the reference-state line intensity S_i(η, T_ref), as shown in Formula (20), where T_ref is the reference temperature; Q represents the partition function; and E is the lower-state energy of the transition.

Background Radiation Calculation and Instrument Performance Simulation

The background at-sensor radiance can be expressed as Formula (21) [39], considering the atmospheric adjacency effect under the assumption of a flat subsurface. In the infrared bands (>2.5 µm), the atmospheric adjacency effect contributes relatively little radiation to the at-sensor radiance [29]; thus, the scattered or reflected radiance from ground thermal radiation influenced by adjacency effects is neglected, as shown in Formula (22). The unknown quantities can be calculated by calling MODTRAN several times [40].
where A and Ā are coefficients describing the solar radiance and the atmospheric thermal radiance entering the sensor after reflection from the image-pixel surface, respectively; B and B̄ are coefficients describing the solar radiation and the atmospheric thermal radiation reflected into the sensor by the area-averaged ground surface, respectively; C and C̄ are coefficients describing the solar radiation and the atmospheric thermal radiation reaching the sensor after scattering in the atmosphere alone, respectively; S represents the atmospheric spherical albedo; ε_t and ε_b denote the emissivity of the image-pixel surface and of the area-averaged ground surface, respectively; τ_d and τ_s are the direct transmission along the line of sight and the effective transmission along the scattering path, respectively; and T_t and T_b represent the temperatures of the image-pixel surface and the area-averaged ground surface, respectively. For cloud scenes, the physical parameters of the clouds are considered horizontally uniform within a single pixel, and the cloud radiance is obtained by setting the cloud parameters "ICLD", "CTHIK", "CALT" and "CEXT" in MODTRAN. Instrument performance simulation is achieved through spatial imaging degradation, spectral response, and noise superposition. Spatial degradation includes sampling-interval degradation and imaging blur; the term τ_atm^{h→toa}(L_Tar S_Tar + L_Bkg^h S_Bkg)/d² in Formula (2) describes the process of spatial resampling, which can be simulated by setting the GSD to the instrument sampling interval. Imaging blur is caused by factors such as the external imaging environment and the imaging capability of the instrument; the energy of a sub-pixel target appears distributed over several surrounding pixels [14,27]. During instrument imaging, the blur can be viewed as low-pass filtering of the ground scene by the full imaging chain, evaluated by the MTF or the PSF. Accordingly, the spatial-domain imaging blur can be expressed as Formula (23). A Gaussian function (24) is usually employed to fit the point spread function, and the MTF is the modulus of the Fourier transform of the PSF; the PSF can be calculated by Formula (25). The spectral response function is commonly applied to describe an instrument's spectral response characteristics [41]. The effective spectral radiance obtained by the sensor is considered a weighted average of the continuous radiance spectrum with the spectral response function, as shown in Formula (26). Gaussian functions are often used to fit spectral response functions, and a spectral response curve can be generated for a given central wavelength and full width at half maximum (FWHM) via Formulas (27) and (28), where L_SRF_i, SRF_i, CWL_i and FWHM_i are the spectral radiance, the spectral response function, and its central wavelength and full width at half maximum for the i-th band, respectively. Noise superposition is based on the noise-equivalent radiance [42] and is modeled by adding a normally distributed random number to each band's radiance, as shown in Formula (29). The NE∆T is applied to gauge the noise level of the instrument; each band's noise-equivalent radiance (NER) can be calculated from the NE∆T given in the instrument design, as shown in Formula (30), where L_n(i) denotes the i-th band noise radiance; T_B is the blackbody temperature at which the NE∆T is defined; and ∆T is the Planck-function linear temperature difference.
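Plausible reconstructions of the spectral response and noise relations, Formulas (26)-(30), based on the Gaussian fitting and NE∆T definitions above, are

L_{SRF\_i} = \frac{\int L(\lambda)\, SRF_{i}(\lambda)\, \mathrm{d}\lambda}{\int SRF_{i}(\lambda)\, \mathrm{d}\lambda}, \qquad SRF_{i}(\lambda) = \exp\!\left[-4\ln 2\,\frac{(\lambda - CWL_{i})^{2}}{FWHM_{i}^{2}}\right]

L_{n}(i) \sim \mathcal{N}\!\left(0,\, NER_{i}^{2}\right), \qquad NER_{i} \approx \frac{\partial L_{BB}}{\partial T}\bigg|_{T_{B}} \, NE\Delta T

where the linearization of the Planck function around T_B in the last relation corresponds to the "Planck function linear temperature difference" mentioned in the text.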
Materials for the Simulation Case Study

The space-based observed aircraft infrared characteristics model produces the simulated aircraft observation radiance from inputs such as ground reflectivity, temperature, and aircraft parameters. Two simulation experiments were designed to validate the model's fidelity: one using satellite data to validate the space-based observation simulation model for aircraft, and the other using ground-based measurements of the plume to validate the accuracy of the plume modeling, complementing the former. The simulation time and aircraft parameters from the validation of the space-based observation model were also used in evaluating the aircraft observed radiance contributions and the impact of instrument performance. The onboard data gathered by the VIMS of the GF-5 satellite were selected as the reference for the simulation validation, mainly to verify the simulation accuracy in the infrared range. The VIMS provides images in 12 bands from the visible to the thermal infrared, six of which were used for validation in this study. The central latitude and longitude of the chosen data are 32.212895° N and 126.392002° E, located in the East China Sea and imaged on 25 June 2019, with the specific parameters shown in Table 1. The aircraft's position in the data is shown in Figure 4a; the aircraft is located above cirrus and followed by its contrail. A query of Flightradar24 historical data shows that the aircraft is a Boeing 777-246 flying from Tokyo, Japan, to Shanghai, China, at an altitude of 12,192 m and a flight speed of 218.64 m/s; the specific parameters are shown in Table 2. As shown in Figure 4c, the aircraft is not guaranteed to lie within a given pixel, and its position is inconsistent between bands due to inter-band offsets. Accordingly, the target signal needs to be extracted band by band. To ensure that the aircraft signals lie within a single pixel for the simulation validation, 3 × 3 and 4 × 4 pixel blocks were merged (black box in Figure 4c) to obtain the mixed signals at 120 m and 160 m spatial resolution. The sea surface temperature, 295 K, was obtained from the SST CCI data of ECMWF [43]. The sea surface reflectance was taken from the ECOSTRESS spectral library [44], and the skin emissivity was set to 0.6 [45]. The aircraft skin is considered a Lambertian body, with emissivity and reflectivity summing to one. The cloud thickness at the aircraft location was inverted using MODTRAN and the measured cloud data (average of the green areas in Figure 4a). Under the cirrus assumption, the cloud base and top altitudes are 8.1 and 9.2 km, at which point the simulation results are closest to the measurements.
The main purpose of the simulation validation experiment was to examine the ability to simulate the space-based observational characteristics of the pixel containing the aircraft, so the spectral response was used to describe the VIMS instrument performance. The spectral response functions of B7-B8 were generated with Gaussian functions based on the spectral ranges given in Table 1, and the B9-B12 spectral response functions were provided by the Numerical Weather Prediction Satellite Application Facility [46], as shown in Figure 4b. As the VIMS cannot capture the infrared spectral characteristics of the aircraft plume, such as those at 4.2 µm, the Swedish Defense Research Agency's engine plume measurements [6] were used to validate the aircraft plume model. The plume simulation considered the effects of CO2 and H2O, with a gas velocity of Mach 0.6, an ambient atmospheric temperature of 290 K, an atmospheric pressure of 101 kPa, an air humidity of 35%, and a detection distance of 20 m perpendicular to the plume. Horizontal-path atmospheric attenuation and path radiation were also considered.
Space-Based Simulation Validation

The simulation results of the aircraft observed radiation under space-based infrared observation were validated against the B7-12 data from VIMS, as shown in Figure 5. The background (green box in Figure 4a) spectral mean and its distribution were calculated and compared with the simulation results (Figure 5a). The 3 × 3 and 4 × 4 pixel resamplings were used to extract the aircraft observed radiance spectra (containing both aircraft and background), which were compared with the simulation results in Figure 5b. Comparisons of the aircraft and background spectral curves are given in Figure 5c,d; the measured radiance and T-B relationship agree with the established model. To quantify the errors between simulations and measurements, the relative and absolute errors at different spatial scales and the T-B contrast were calculated, as shown in Table 3. The MREs of the aircraft observation characteristics simulations for the 3 × 3 and 4 × 4 pixel sizes are 8.32% and 6.42%, respectively. The largest contribution to the RE comes from band B7 (−28.46%, −20.40%), which has low radiance and is therefore sensitive to errors, as corroborated by the AE. Compared with the simulation accuracy of the aircraft observation characteristics, the RE of the pure-aircraft simulation is larger: the MREs of the pure-target simulation are 71.22% and 56.71%, with B7 again contributing the largest RE (161.95%, 123.92%). The simulated and measured cloud backgrounds were also compared, as shown in Figure 5a, with an MRE of 4.52%. This indicates that the accuracy of the overall model depends more on the background simulation, because the aircraft contribution is smaller; accordingly, the MRE decreases and converges to the background simulation accuracy (MRE of 4.52%) as the spatial resolution decreases. Meanwhile, the simulation accuracy of the pure aircraft also changes with scale (in theory, it should not), revealing a deviation in the estimation of the aircraft signal factor. There may be two reasons for this deviation: first, unknowns or deviations in the aircraft parameters, such as the observation angle and the actual size, prevent an effective estimate of the aircraft signal factor; second, the aircraft projected-area ratio may introduce errors. Objectively, most research on infrared signature analysis is controlled by military research institutions, with limited details in the open literature [47]. The lack of target-related data, especially measured space-based infrared data for aircraft, is an important factor restricting the validation and improvement of space-based infrared imaging simulation models. Accordingly, the consistency of the T-B relationship between measurements and predictions can also be used to evaluate the reliability of the simulation model. The simulated and measured T-B contrasts show that the aircraft observed radiance in band B7 is higher than the background (brighter than the background), while the opposite holds in the other bands. This indicates alignment between the simulated and measured T-B relative relationships.
Plume Simulation Validation

Plume infrared characteristic measurement experiments were carried out on an engine test stand [6], and the results were used to validate the plume simulation accuracy, supplementing the space-based observation simulation validation. The comparison between the measured and simulated results is shown in Figure 6. The MRE over 4.1-5 µm was calculated to be approximately 61.64% (excluding the positions of strong atmospheric absorption). The results show good agreement between simulation and measurement, with the same spectral characteristics; the partially unknown parameters of the experimental environment caused errors between the simulation and the experiment. In reference [4], the spectral radiation intensity is accurate to within 50% after considering all uncertainties associated with the input. By comparison, the simulation accuracy in this paper achieves a relatively good result despite the unknown input-parameter uncertainties.
Evaluation of the Aircraft and Background Contribution to the Observed Radiance

It is widely recognized that background radiation affects the characteristics of aircraft space-based observations, especially at low spatial resolutions, where the target features are not obvious and are mixed with the background radiation. However, a quantitative analysis of the background and aircraft contributions to the observed radiation gathered by space-based infrared sensors is still lacking. The contributions of each radiance component at 2.5-13 µm were therefore calculated for the body-leaving radiance and the TOA radiance at different spatial resolutions and day/night conditions, as shown in Figures 7 and 8; the night scenes mainly consider the absence of solar radiation. As shown in Figure 7a,c, the contribution of skin-reflected radiance to the body-leaving radiance at 2.5-3 µm reaches 98%, while the plume radiance occupies a smaller proportion that is still higher than the background radiance. The plume and skin-reflected radiance also dominate near 4.3 µm, as the atmosphere absorbs strongly in these two spectral ranges, lowering the background radiation energy. In all other spectral bands, the background radiation makes a large radiative contribution, especially in the long-wave infrared, where it accounts for more than 90% and up to 99%. As shown in Figure 7d-f, the contribution of background radiation shows the same trend between daytime and nighttime, but the contribution of skin-reflected radiance at nighttime is negligible due to the absence of solar radiation.
As seen in Figure 8, the TOA radiance adds the atmospheric path radiance from the aircraft altitude to the sensor. Atmospheric path radiation is an important component around 4.3, 6, and 9.5 µm, especially at 4.3 µm, where it accounts for up to 100%. Figures 7a-c and 8a-c show that the contribution of plume radiation is reduced compared with its contribution to the body-leaving radiance. H2O and CO2 are both the main sources of thermal radiation in the plume and the main species of atmospheric attenuation; the plume radiation energy is therefore easily attenuated by the atmosphere, which makes it hard for space-based sensors to gather plume signals. The skin-reflected radiation is the largest variable in the TOA radiance between daytime and nighttime, being reduced by the absence of solar radiation, similar to its diurnal variation in the body-leaving radiance. (In Figures 7 and 8, panels (d-f) are the contribution plots at nighttime; the blue, red, yellow, purple, and green areas represent the relative contributions of the background radiance, plume radiance, skin-reflected radiance, skin emission radiance, and atmospheric path radiance, respectively.) Jointly, the blue areas of Figures 7 and 8 illustrate that the background radiation has an extremely high radiative contribution overall, which increases with GSD. The skin emission radiance is mainly concentrated at 5-13 µm, and its relative contribution decreases with decreasing spatial resolution, from 5% to 1%. The yellow areas in Figures 7 and 8 together illustrate the importance of skin-reflected radiance, which accounts for a high proportion of both the body-leaving and TOA radiance in the daytime. As shown in Figure 9, when the skin-reflected radiation is excluded, the aircraft observed radiance in band B7 falls below the background radiance, which is inconsistent with the real relative relationship (Table 3). This highlights the importance and necessity of considering skin-reflected radiance in the simulation model. The diurnal variation of the skin-reflected radiance is also noteworthy.
Analysis of the Effect of Instrument Performance on Target-Background Contrast

The instrument performance is an important factor affecting the observed aircraft characteristics. For space-based observations, with their long observation distances, unidentified objects, and unpredictable in-orbit instrument performance, it is necessary to examine the effect of instrument performance parameters on the relative differences between target and background. The impacts of instrument performance parameters such as GSD, MTF, SRF, and NE∆T on the T-B contrast were analyzed using the imaging time, atmosphere, and aircraft parameters of the space-based simulation validation session.

With regard to aircraft space-based observation characteristics, spatial degradation is the most intuitive impact factor. Figure 11 shows the T-B contrast spectra at GSDs of 70-400 m. The spectral contrasts are shown in three segments because of the large differences between the spectral ranges. The contrast becomes smaller in absolute terms as the GSD increases, with a 97% reduction from 70 to 400 m: the lower the spatial resolution, the higher the background radiance contribution within the aircraft pixel, which brings the observed aircraft radiance closer to the background radiance. The contrast curves for different MTFs at a GSD of 120 m are presented in Figure 12, demonstrating that the T-B relative differences increase with increasing MTF.

The results in Figures 11 and 12 show consistently high contrast around 2.7 µm and generally higher contrast at 2.5-3.5 µm than at 3.5-13 µm. Therefore, 2.7 µm is used in this paper as a general reference for the 2.5-3.5 µm range. Within 3.5-13 µm, contrast peaks occur around 4.2, 4.4, and 5.7 µm. According to Figure 8a-c, the skin-reflected radiation and the plume radiation at 2.7, 4.2, and 4.4 µm play the dominant role in raising the contrast there. In comparison, the contrast at 5.7 µm is smaller because of the small contribution of aircraft radiation. It should also be noted that 72% of the contrasts at 2.5-13 µm are negative, indicating that the aircraft pixel is darker than the pure background.
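The dependence of the contrast on GSD can be illustrated with a simple area-weighted mixing model. The following sketch is our illustration, not the paper's code; the contrast definition C = (L_pixel − L_bkg)/L_bkg and the GSD values come from the text, while the radiances and aircraft area are placeholders:

```python
# Sketch: how T-B contrast shrinks as GSD grows. The aircraft pixel is a
# mixture of target and background radiance weighted by projected area.
L_target = 12.0          # equivalent aircraft radiance (placeholder units)
L_bkg    = 8.0           # uniform background radiance (placeholder units)
S_target = 70.0 * 60.0   # projected aircraft area in m^2 (assumed)

for gsd in [70, 120, 200, 400]:
    fill = min(S_target / gsd**2, 1.0)        # target fill fraction of the pixel
    L_pixel = fill * L_target + (1 - fill) * L_bkg
    contrast = (L_pixel - L_bkg) / L_bkg      # T-B contrast
    print(f"GSD {gsd:3d} m: fill {fill:5.3f}, contrast {contrast:+.4f}")
```

Since the fill fraction scales as 1/GSD^2, going from 70 m to 400 m suppresses the contrast by a factor of (70/400)^2 ≈ 0.031, consistent with the roughly 97% reduction reported above.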
Apart from spatial degradation, the instrument's spectral response is a non-negligible influence. The T-B contrast and the SRFs of VIMS bands B7-B12 at a GSD of 120 m are given in Figure 13. Band B7 of VIMS has a central wavelength of 3.68 µm and an FWHM of 0.35 µm. The SRF of B7 does not cover the high-contrast region around 2.7 µm, so the observed T-B contrast is not high. Meanwhile, the contrast in the spectral response regions of B8-B12 is negative, so the aircraft pixel in these bands is darker than the background.

The above discussion illustrates the importance of designing or selecting an appropriate spectral response region for satellite instruments aimed at aerial target detection. The effect of the SRF was further analyzed by calculating the T-B contrast for spectral bands with central wavelengths of 2.7, 4.2, 4.4, and 5.7 µm and FWHMs of 0.02, 0.04, 0.08, and 0.16 µm. To account for the impact of spectral calibration error, a random variable with a Gaussian distribution and a standard deviation (STD) of 1 nm was added to the center wavelength. The T-B contrast at each central wavelength and its variation range were computed over several iterations, as shown in Figure 14.
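The band-level Monte Carlo just described (Gaussian SRFs at the four candidate center wavelengths, with a 1 nm Gaussian calibration jitter) can be sketched as follows; this is our reconstruction of the procedure, and the target/background spectra are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(2.5, 13.0, 2000)   # micrometres
dwl = wl[1] - wl[0]
# Placeholder target/background spectral radiances (assumed shapes):
L_tar = 1.0 + 0.8 * np.exp(-((wl - 2.7) / 0.1)**2)
L_bkg = np.full_like(wl, 1.0)

def band_contrast(center, fwhm):
    """Band-integrated T-B contrast for a Gaussian SRF."""
    sigma = fwhm / 2.3548                        # FWHM -> Gaussian sigma
    srf = np.exp(-0.5 * ((wl - center) / sigma)**2)
    srf /= srf.sum() * dwl                       # normalise the SRF
    t = np.sum(L_tar * srf) * dwl
    b = np.sum(L_bkg * srf) * dwl
    return (t - b) / b

# Monte Carlo over a 1 nm (0.001 um) Gaussian center-wavelength error:
for c0 in [2.7, 4.2, 4.4, 5.7]:
    for fwhm in [0.02, 0.04, 0.08, 0.16]:
        vals = [band_contrast(c0 + rng.normal(0, 0.001), fwhm) for _ in range(200)]
        print(f"center {c0} um, FWHM {fwhm}: "
              f"contrast {np.mean(vals):+.4f} +/- {np.std(vals):.4f}")
```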
The contrast is consistently highest at the central wavelength of 2.7 µm, as shown in Figure 14. The contrast does not vary consistently with the FWHM, indicating that the optimal FWHM for each spectral band is relatively independent. Furthermore, the contrast variations caused by central wavelength shifts deserve attention. Judging by the error bar lengths, the contrast variation range at 2.7 µm is larger, i.e., more sensitive to spectral calibration errors. The error bars at 2.7 µm are of equal length on the positive and negative sides, suggesting that the central wavelength could be further optimized to improve the contrast; the 5.7 µm position shows a similar behaviour. It is also found that the error bar length decreases as the FWHM increases, indicating that the wider the spectral response range, the less the contrast is affected by central wavelength shifts. However, an excessively wide spectral response range also reduces the contrast. It is thus essential to select an appropriate spectral response while properly controlling the spectral calibration error.
Infrared radiation is less energetic than visible radiation, and infrared remote sensing data are therefore more susceptible to instrument noise. It is generally required that the radiance difference between target and background exceed the noise equivalent radiance (NER), so that the noise does not obscure the target signal. Therefore, this paper compared the T-B radiance difference with the NER of the VIMS and ASTER instruments [29,48] and analyzed the effect of noise on the contrast. Figure 15 shows the T-B radiance difference at a GSD of 120 m and the corresponding NER at an NE∆T of 0.15 K@300 K. The results show that the T-B radiance difference at 4.25 µm, 4.57 µm, and 5-7 µm is smaller than the NER and is therefore more susceptible to noise.

The noise was added to the observed aircraft radiation, under the assumption that the background radiance results from averaging over a uniform scene and is not affected by noise. Figure 16 shows the range of variation of the contrast and its standard deviation (STD). The results indicate that the standard deviation is greatest around 2.7 µm, but its effect there is almost negligible because of the large contrast. The range of contrast variation around 4.25 µm and 6 µm covers the zero axis, indicating a change in the T-B relative relationship (the light-dark relationship). Such a change in the light-dark relationship is clearly undesirable, as it seriously hampers the distinction between target and background.

Discussions

The validation and evaluation results illustrate that the proposed model can generate accurate simulation data consistent with the measured data. However, the spectral window and the challenges for aircraft detection remain open for further discussion.
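The NER used in this comparison follows from the quoted NE∆T through the temperature derivative of the Planck function at the 300 K reference. A minimal sketch of that standard conversion (only the 0.15 K@300 K value is taken from the text; everything else is textbook radiometry):

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_um, T):
    """Planck spectral radiance in W / (m^2 sr um)."""
    wl = wl_um * 1e-6
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * T)) * 1e-6

def ner_from_nedt(wl_um, nedt, T=300.0, dT=0.01):
    """NER ~ (dB/dT) * NEdT, via a finite-difference Planck derivative."""
    dB_dT = (planck(wl_um, T + dT) - planck(wl_um, T - dT)) / (2 * dT)
    return dB_dT * nedt

for wl in [2.7, 4.25, 4.57, 6.0, 10.0]:
    print(f"{wl:5.2f} um: NER = {ner_from_nedt(wl, 0.15):.3e} W/(m^2 sr um)")
```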
Space-Based Infrared Detection Spectral Window for Aircraft

In this paper, the aircraft and background radiation contributions are analyzed quantitatively, with a focus on how the 2.7, 4.2, 4.4, and 5.7 µm bands are influenced by instrument performance. As seen in Section 3.2, aircraft skin reflections and plume radiance make important radiance contributions in these spectral bands. This indicates that the target-background contrast is higher in the spectral ranges where aircraft radiation makes an important contribution. Because of this, the contrast is consistently higher at 2.5-3.5 µm, where aircraft radiation can contribute up to 98%.

The stronger atmospheric attenuation experienced by the background, in addition to the difference between the aircraft's and the background's own radiation, is an important cause of the higher contrast at 2.7 µm. The atmospheric transmittance from altitudes of 0-12 km to the TOA and the atmospheric profiles of CO2 and H2O are given in Figure 17. The whole-atmosphere transmittance is almost zero around 2.7, 4.3, and 5-8 µm, which blocks the Earth's surface radiation from satellite observation (only the path radiance remains). It is the low atmospheric transmittance and path radiation at 2.7 µm that result in low background radiation, meaning that only very little aircraft radiation is required to create high contrast. As seen in Figure 17b, the main gas molecules responsible for atmospheric attenuation are distributed at 0-10 km altitude, which effectively turns some bands outside the classical atmospheric windows into detection spectral windows for aerial targets flying above most of the absorbing gas.

The degree to which the T-B contrast is affected by instrument performance varies across the characteristic spectral bands. Undeniably, the T-B contrast around 2.7 µm remains consistently high compared with the other spectral bands. From Figures 14 and 16, the contrast at 4.2 µm is more sensitive to the FWHM and the spectral calibration accuracy, while the contrast at 4.4 µm is more susceptible to instrument noise. Moreover, 5.7 µm offers no significant advantage over the other spectral bands.
The Challenge of Space-Based Infrared Detection of Aircraft

In previous research, aircraft detection could make use of spatial [49], spectral [50], and motion information [2,51] from the aircraft. However, the low energy and long range of infrared remote sensing limit the use of spatial and spectral information in detection. Some researchers [23,24,27] used the SNR and CSNR to select feature spectral bands, which have the potential to support detecting and tracking aircraft based on motion characteristics [52,53]. This paper uses radiance contribution analysis to explain why those spectral bands were selected. It remains doubtful, however, whether aircraft identification can be achieved using a single band. The method of aircraft identification for space-based infrared observation is therefore still a challenge. The primary question is what information about the aircraft is used to achieve detection or identification, as this affects both instrument design and algorithm development.

Both traditional and artificial-intelligence algorithms require a certain amount of measured data for feature analysis and training. However, the lack of space-based infrared measurements has limited the study of aircraft detection algorithms. The publicly available infrared datasets are mostly derived from ground-based measurements, and the target size in those images is not representative of space-based observations. It is also costly to collect a large-scale dataset with accurate pixel-level annotations [54]. The simulation model presented in this paper has the potential to provide space-based infrared data containing aircraft for feature analysis and network training.
Conclusions

In this paper, a simulation model of the space-based observed infrared characteristics of aircraft is established, coupling target radiation, background radiation, and instrument performance effects. The accuracy of the simulation model was validated by comparing the model predictions with data from space-based and ground-based measurements. The validation results reveal that the measured radiance and the T-B relationship agree with the established model. It is also found that the model of space-based observed aircraft infrared characteristics depends strongly on the background simulation accuracy. To further understand the causes of the observed aircraft characteristics, the contributions of the background radiation, skin reflected/emitted radiation, atmosphere path radiation, and plume radiation were evaluated. The evaluation of the radiance components indicates that the background radiance plays the major role overall, while the observed radiance at 2.5-3 µm comes mainly from skin reflection and plume radiance. Omitting the skin-reflected radiance decreases the model reliability, and its reduction at nighttime reduces the T-B contrast. The effect of the instrument performance parameters on space-based infrared detection was analyzed, and the different changes in contrast in the detection windows at 2.7, 4.2, 4.4, and 5.7 µm were highlighted. The results show a consistently high level of contrast at 2.7 µm compared with the other spectral bands, although it is susceptible to diurnal variations. The discussion shows that the target-background contrast is higher in the spectral ranges where aircraft radiation makes an important contribution. The difference in T-B self-radiation and the stronger atmospheric attenuation of the background contribute to the higher contrast at 2.7 µm.

The model proposed in this paper can provide data for the development of space-based infrared detection algorithms and for the evaluation of on-board instrument performance. The analysis and discussion based on this model provide insight into the causes of the target observation characteristics and into the effect of instrument performance on the T-B relative difference.

Notation (recovered fragment): ... and the path radiance from the aircraft altitude to the TOA, respectively; L_Tar and L_Bkg^H are the aircraft and background equivalent radiance at flight altitude H, respectively; S_Tar and S_Bkg are the projected areas of the aircraft and the background in the viewing direction, respectively; d is the ground sampling distance, assuming that d^2 = S_Tar + S_Bkg.

Figure 1. Diagrammatic sketch of aircraft infrared observation using space-based sensors. Schematic CO2 and H2O column density diagrams with altitude are in the upper left corner.
Figure 2. The flow chart of evaluation and analysis in this study.
Figure 3. Observation line-of-sight uniform division schematic. ∆l is the length of a slab; N is the total number of sight lines intersecting with the plume; µ and ν are the elevation and azimuth angles relative to the aircraft; P, T and X represent the gas pressure, temperature and species content, respectively; n represents the number of stratified layers; ∆d is the spatial sampling interval of the LOS; L_BB^n is the blackbody radiance of the n-th slab; τ^n denotes the transmissivity of the n-th slab.
Notation (recovered fragment): ... the intensity of the i-th spectral line at a given wavenumber when the temperature is T; ... is the line-shape function of the i-th spectral line, usually the Voigt line function; and η_0i is the central wavenumber of the i-th spectral line.
Figure 4. Auxiliary data for simulation: (a) aircraft position (red circle), with the green box marking the selected cloud background area; (b) spectral response functions of B7-B12 and sea surface reflectance; (c) aircraft position and pixel aggregation information for the B7-12 images.
Figure 5. Comparison of the background and aircraft radiance spectra: (a,b) comparison of simulated and measured restored radiance; (c,d) comparison of target and background radiance curves.
Figure 6. Comparison of plume measurement and simulation, and atmospheric transmittance over a 20 m horizontal path.
Figure 7. Relative contributions to the body-leaving radiance at 2.5-13 µm. (a-c) are the contribution plots for three spatial resolutions in the daytime; (d-f) are the contribution plots for the nighttime; the blue, red, yellow, and purple areas represent the relative contributions of the background radiance, plume radiance, skin-reflected radiance, and skin emission radiance, respectively.
Figure 8. Relative contributions to the TOA radiance at 2.5-13 µm. (a-c) are the contribution plots for three spatial resolutions in the daytime; (d-f) are the contribution plots for the nighttime; the blue, red, yellow, purple, and green areas represent the relative contributions of the background radiance, plume radiance, skin-reflected radiance, skin emission radiance, and atmosphere path radiance, respectively.
Figure 9. B7-12 radiance comparison of aircraft and background simulated without the skin-reflected radiance. The left axis is the radiance and the right axis is the contrast, where a negative number means that the target radiance is lower than the background radiance.
Figure 10. Comparison of the target-background contrast ratio in daytime and nighttime.
Figure 11. Spectral contrast of target and background at different GSDs of 70-400 m.
Figure 12. Spectral contrast of target and background at different MTFs of 0.1-0.3.
Figure 13. Target-background contrast and the spectral response functions of VIMS B7-12 at a GSD of 120 m.
Figure 14. Contrast and its variation range with different center wavelengths and FWHMs.
Figure 15. Comparison of the noise equivalent radiance and the radiance difference between target and background.
Figure 16. The variation range of the target-background contrast and its standard deviation curve, considering the influence of noise.
Figure 17. Atmospheric transmittance from different altitudes to the top of the atmosphere, and atmospheric profiles of CO2 and H2O, derived from the output files of MODTRAN.
Equation (recovered fragment, TOA radiance model): the TOA radiance combines the area integrals of the skin terms (L_E_skin + L_R_skin) over S_Skin, the nozzle terms (L_E_plume + τ_Plume L_Nozzle) over S_NP, the plume terms with transmitted background (L_E_plume + τ_Plume L_Bkg^H) over S_Plume − S_NP, and the background term L_Bkg^H over S_Bkg, with the path radiance L_path^(H→TOA) added, convolved with the sensor response R_sensor, plus the noise term L_noise; the L^TOA terms denote the radiances of the skin emission, reflected radiance, plume radiance, and background radiance reaching the sensor, respectively.
Table 1. Imaging information and instrument performance parameters of GF-5 VIMS. The VIMS provides images in 12 bands from the visible to the thermal infrared, six of which were used for validation in this study. The central latitude and longitude of the chosen data are 32.212895°N and 126.392002°E, located in the East China Sea and imaged on 25 June 2019, with the specific parameters shown in Table 1.
Table 3. Simulation accuracy and target-background contrast calculation results.
Photonic quantum data locking

Quantum data locking is a quantum phenomenon that allows us to encrypt a long message with a small secret key with information-theoretic security. This is in sharp contrast with classical information theory, where, according to Shannon, the secret key needs to be at least as long as the message. Here we explore photonic architectures for quantum data locking, where information is encoded in multi-photon states and processed using multi-mode linear optics and photo-detection, with the goal of extending an initial secret key into a longer one. The secret key consumption depends on the number of modes and photons employed. In the no-collision limit, where the likelihood of photon bunching is suppressed, the key consumption is shown to be logarithmic in the dimensions of the system. Our protocol can be viewed as an application of the physics of Boson Sampling to quantum cryptography. Experimental realisations are challenging but feasible with state-of-the-art technology, as techniques recently used to demonstrate Boson Sampling can be adapted to our scheme (e.g., Phys. Rev. Lett. 123, 250503, 2019).

Introduction

In classical information theory, a celebrated result of Shannon states that a message of N bits can only be encrypted using a secret key of at least N bits [1]. This result, which lays the foundation of the security of the one-time pad, does not necessarily apply when information is encoded into a quantum state of matter or light. The phenomenon of Quantum Data Locking (QDL), first discovered by DiVincenzo et al. [2], shows that a message of N bits, when encoded into a quantum system, can be encrypted with a secret key of k ≪ N bits. QDL guarantees information-theoretic security. A generic QDL protocol proceeds as follows:

1. Alice and Bob publicly agree on a set of K bases of the quantum system, defined by unitary transformations {U_k}_{k=1,...,K} applied to a reference basis, and share an initial secret key of log K bits.

2. To send information to Bob, Alice first uses the secret key of log K bits to choose one particular unitary transformation, i.e., one particular basis in the agreed set of K bases.

3. Alice selects M basis vectors, {U_k |j_x⟩}_{x=1,...,M}, from the chosen basis and uses them as a code to send log M bits of classical information through the quantum channel.

This encoding of classical information into a quantum system A is described by a classical-quantum state, Eq. (1), in which the classical variable X encoded by Alice is represented by a set of M orthogonal vectors {|x⟩}_{x=1,...,M} of a dummy quantum system. In this work we assume that different code words have equal probability. As the goal of the protocol is to extend an initial secret key into a longer one, using equally probable code words is a natural assumption; it simplifies the analysis of the QDL protocol, although it can be relaxed [21,22].

The code words prepared by Alice are then sent to Bob through a quantum channel, described as a completely positive and trace-preserving map N_{A→B} that transforms Alice's system A into Bob's system B. The channel maps the state in Eq. (1) into the state of Eq. (2). We require a QDL protocol to have the properties of correctness and security.

Correctness. The property of correctness requires that, if Bob knows the secret key used by Alice to choose the code words, he is able to decode reliably. For example, if the channel is noiseless, then N is the identity map and Bob receives the encoded state unchanged, Eq. (3). In this case, Bob can simply apply the inverse unitary, U_k^{-1}, followed by a measurement in the computational basis. In this way, Bob can decode with no error for any M ≤ d. If the channel is noisy, Alice and Bob can still communicate reliably at a certain rate of r < log d bits per channel use.
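For concreteness, here is a hedged reconstruction of the states referred to above as Eqs. (1)-(3), written from the definitions given in the text (equiprobable code words U_k|j_x⟩ and a dummy register X with orthonormal states |x⟩); the original display equations may differ in notation:

```latex
% Hedged reconstruction of Eqs. (1)-(3) from the surrounding definitions.
\rho_{XA}^{(k)} \;=\; \frac{1}{M}\sum_{x=1}^{M}
  |x\rangle\langle x|_X \otimes U_k |j_x\rangle\langle j_x| U_k^\dagger ,
  \tag{1}

\rho_{XB}^{(k)} \;=\; \frac{1}{M}\sum_{x=1}^{M}
  |x\rangle\langle x|_X \otimes
  \mathcal{N}_{A\to B}\!\left( U_k |j_x\rangle\langle j_x| U_k^\dagger \right),
  \tag{2}

\text{noiseless case: } \mathcal{N} = \mathrm{id}
  \;\Rightarrow\; \rho_{XB}^{(k)} = \rho_{XA}^{(k)} .
  \tag{3}
```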
This is possible by using error correction at any rate below the channel capacity, r_max = I(X;Y|K) [23]. Here I(X;Y|K) denotes the mutual information between the input variable X and the output Y of Bob's measurement, given the shared secret key K. Notice that here we need classical error correction and not quantum error correction, as the goal of Alice and Bob is to exchange classical information and not quantum information. Furthermore, we apply post facto error correction, as is commonly done in quantum key distribution [24], in which the error-correcting information is sent independently on a classical authenticated public channel.

We emphasise the importance of the assumption that the adversary has no quantum memory for the security of post facto error correction. This assumption guarantees that a potential eavesdropper has already measured their share of the quantum system when the error-correction information is exchanged on the public channel. If b bits of error-correcting information are communicated on the public channel, then the eavesdropper cannot learn more than b bits of information about the message (see the footnote below). If instead the eavesdropper has a quantum memory with storage time τ, then Alice and Bob need to wait for a time longer than τ after the quantum signals have been transmitted before proceeding with post facto error correction. In this work we assume that Alice and Bob know an upper bound on τ.

Security. The property of security requires that, if Bob does not know the secret key, he can obtain no more than a negligibly small amount of information about Alice's input variable X. To clarify this, consider that, if Bob does not know the secret key used by Alice, then his description of the classical-quantum state is the average of Eq. (2) over the key, denoted σ in Eq. (4). In QDL, the security is quantified using the accessible information [2,6] (or similar quantities [7,21,25]). Recall that the accessible information I_acc(X;B)_σ is defined as the maximum information that Bob can obtain about X by measuring his share of the state σ, Eq. (5), where the optimisation is over the measurement maps M_{B→Y} on system B, and I(X;Y) is the mutual information between X and the outcome Y of the measurement.

Footnote 2: To see that the public channel for error correction does not render the protocol insecure, we note that Eve's additional information about the secret key is bounded by classical information theory as follows. Let X be the message sent by Alice, Z the output of Eve's measurement, and I(X;Z) the mutual information. After error correction, Eve obtains a bit string C(X). Hence, we need to consider the mutual information I(X;ZC(X)). It follows from the property of incremental proportionality [2] of the mutual information that I(X;ZC(X)) ≤ I(X;Z) + H(C(X)), where H(C(X)) is the entropy of C(X). This implies that, knowing C(X) after she has measured the quantum system, Eve cannot learn more than H(C(X)) bits about the message X.

The security property can be defined in different ways, depending on how the state σ is chosen. Here we consider a strong notion of QDL [3] and evaluate the accessible information directly on the noiseless ensemble, Eq. (6). This is equivalent to saying that the information remains encrypted even if Bob is capable of accessing the quantum resource directly, without the mediation of a noisy channel. The data-processing inequality [23] then implies that the protocol is secure for noisy channels too. In conclusion, we say that the protocol is secure if I_acc(X;B) = O(ε log M), with ε arbitrarily small.
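Similarly, the quantities referred to above as Eqs. (4)-(7) admit a natural reconstruction from the surrounding definitions (hedged; in particular, the form of Eq. (6) reflects the "strong" notion of QDL described in the text, and Eq. (7) follows from the statement that the initial log K bits are subtracted from the error-corrected rate):

```latex
% Hedged reconstruction of Eqs. (4)-(7).
\sigma_{XB} \;=\; \frac{1}{K}\sum_{k=1}^{K} \rho_{XB}^{(k)} ,
  \tag{4}

I_{\mathrm{acc}}(X;B)_\sigma \;=\; \max_{\mathcal{M}_{B\to Y}} I(X;Y) ,
  \tag{5}

\sigma \;=\; \frac{1}{MK}\sum_{x=1}^{M}\sum_{k=1}^{K}
  |x\rangle\langle x|_X \otimes U_k |j_x\rangle\langle j_x| U_k^\dagger
  \quad \text{(strong QDL: no channel between Alice and Bob)},
  \tag{6}

r \;=\; \beta\, I(X;Y|K) \;-\; \log K .
  \tag{7}
```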
This means that only a negligible fraction of the information can be obtained by measuring the quantum state without knowledge of the secret key. Intuitively, we expect that the larger K, the smaller the accessible information. This intuition has been proven true using tools from large-deviation theory and coding theory [4,6,7]. The mathematical characterisation of a QDL protocol consists in obtaining, for given ε > 0, an estimate of the minimum integer K_ε such that there exist choices of K = K_ε bases that guarantee I_acc(X;Y) = O(ε log M). Finally, the net secret key rate that can be established between Alice and Bob through a noisy communication channel N is given by Eq. (7), where β ∈ (0,1) is the efficiency of error correction, and we have subtracted the initial amount log K of secret bits shared between Alice and Bob. We emphasise that the mutual information I(X;Y|K) depends on the particular noisy channel, whilst log K is universal. The noisier the channel, the smaller I(X;Y|K), which accounts for the error-correction overhead. The factor β accounts for the fact that practical error correction requires more overhead than expected in theory.

Multiphoton encoding

Let n photons be sent into m optical modes of an interferometer, with at most one photon per input mode. The input mode operators evolve as $\hat{a}_i \to \hat{U}\hat{a}_i\hat{U}^\dagger = \sum_j U_{ij}\hat{a}_j$, with U the unitary transformation describing the interferometer. A passive multi-mode interferometer realises a unitary transformation that preserves the total photon number. The set of all possible transformations that can be realised in this way defines the group of linear optical passive (LOP) unitary transformations, which is isomorphic to the m-dimensional unitary group U(m) (see e.g. Ref. [20]). By Schur's lemma, the group of LOP unitaries has irreducible representations in the subspaces of definite photon number. For applications to photonic QDL, the representation with 1 photon has been studied in previous works [3,10]. This representation has the unique feature of being the fundamental representation of U(m). The representations with higher photon number that we consider here are no longer the fundamental representation.

The output from the interferometer prior to photo-detection can be expanded in the photon-number basis, $|\psi_{\mathrm{out}}\rangle = \sum_{\mathbf{n}} \lambda_{\mathbf{n}} |\mathbf{n}\rangle$, where $\mathbf{n} = (n_1, n_2, \ldots, n_m)$ denotes a photon-number configuration with $n_i$ photons in the i-th mode and $\lambda_{\mathbf{n}}$ its amplitude.

The aim of this paper is to characterise a particular family of QDL protocols where information is encoded into m ≥ 2 optical modes using n > 1 photons. We define the code words by putting photons on different modes, with no more than one photon per mode. In this way we obtain a code book $C_n^m$ that contains $C = \binom{m}{n}$ code words, whereas the overall Hilbert space defined by n photons in m modes has dimension $d = \binom{n+m-1}{n}$ (this includes states with more than one photon in a given mode). For example, with m = 4 modes and n = 1 photon, we have the C = 4 code words |1000⟩, |0100⟩, |0010⟩, |0001⟩. With n = 2 photons, we instead obtain the C = 6 code words |1100⟩, |0011⟩, |1001⟩, |0110⟩, |1010⟩, |0101⟩.

The two users, Alice and Bob, are linked via an optical communication channel that allows Alice to send m optical modes at a time. Initially, we assume the channel is noiseless; later we will extend the analysis to the case of a noisy channel. The goal of the protocol, which is shown schematically in Fig. 1, is for Alice and Bob to expand an initial secret key of log K bits into a longer one.
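The code-book counting used throughout the paper is easy to check numerically; a minimal sketch, reproducing the m = 4, n = 2 example above:

```python
from itertools import combinations
from math import comb

# Enumerate the single-occupancy code words of C_n^m and compare the code-book
# size C = binom(m, n) with the full n-photon Hilbert-space dimension
# d = binom(n + m - 1, n).
def codewords(m, n):
    """All occupation patterns with n photons in m modes, at most 1 per mode."""
    for occupied in combinations(range(m), n):
        yield tuple(1 if i in occupied else 0 for i in range(m))

m, n = 4, 2
words = list(codewords(m, n))
print(words)                    # [(1,1,0,0), (1,0,1,0), ..., (0,0,1,1)]
print(len(words), comb(m, n))   # 6 6  -> matches C = binom(4, 2)
print(comb(n + m - 1, n))       # 10   -> d = binom(5, 2), includes bunched states
```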
For given n and m, Alice defines a code book $\tilde{C}_n^m$ by choosing a subset of M < C code words from $C_n^m$. The code book is publicly announced. We denote the code words as |ψ_x⟩, with x = 1, ..., M. To encrypt these code words, Alice applies an m-mode LOP unitary transformation from a set of K elements {U_k}_{k=1,...,K}. The unitary is determined by the value of her secret key of log K bits. We recall that any LOP unitary can be realised as a network of beam splitters and phase shifters [26,27].

We can directly verify the correctness property for a noiseless communication channel. In this case Bob, who knows the secret key, applies U_k^{-1} and measures by photo-detection. He is then able to decrypt log M bits of information with no error. This implies that Alice and Bob can establish a key of log M bits for each round of the protocol. To characterise the secrecy of the QDL protocol, we need to identify the minimum key size K_ε. This is the task that we accomplish in the sections below.

Preliminary considerations

Before presenting our main results, we need to introduce some notation and preliminary results. First, consider the state defined by taking the average over the LOP unitaries U acting on a state ψ,

$\bar{\rho}_B = \mathbb{E}_U\!\left[\, U |\psi\rangle\langle\psi| U^\dagger \,\right]$,   (10)

where $\mathbb{E}_U$ denotes the expectation value over the invariant (Haar) measure on the group of LOP unitary transformations acting on m optical modes. The choice of the invariant measure is somewhat arbitrary, and other measures can be used, see e.g. Ref. [28]. In Eq. (10), ψ is a vector in the code book $C_n^m$, and by symmetry $\bar{\rho}_B$ is independent of ψ. Also by symmetry, the state $\bar{\rho}_B$ is block-diagonal with respect to the subspaces H_q labelled by the photon-occupation patterns q, with coefficients c_q (Eq. (11)). We are particularly interested in the smallest coefficient in this expansion, which can be computed numerically for given n and m. Examples are shown in Table 1. The results of our numerical estimations suggest that the minimum is always achieved for the pattern q_min = (1,1,...,1,0,...,0), i.e., when each mode contains at most one photon. An analytical expression for c_(1,1,...,1,0,...,0) is given in Ref. [29] (Eq. (12)). Supported by the results of our numerical search, we formulate the following conjecture:

Conjecture 1. The minimum coefficient is attained on the single-occupancy pattern, i.e., c_min = c_(1,1,...,1,0,...,0).

We have used this conjecture to produce the plot in Fig. 3. If the number of modes is much larger than the number of photons squared, m ≫ n² ≫ 1, the probability that two or more photons occupy a given mode is highly suppressed. In this limit we have c_min ≃ n!/m^n (see Appendix D).

The other quantity we are interested in is a parameter γ that bounds the second moment of the transition probability |⟨φ|U|ψ⟩|² relative to the square of its mean (Eq. (13)), where the maximum is taken over a generic n-photon vector φ, and ψ is a vector in the code book $C_n^m$. Again, because of symmetry, γ is independent of ψ. Note that γ quantifies how much the transition probability |⟨φ|U|ψ⟩|² fluctuates when a random unitary is applied. In the regime m ≫ n² ≫ 1, an analytical bound can be computed and we obtain γ ≤ 2(n+1); this is discussed in Appendix D.

Results

Our main result is an estimate of the minimum key size K_ε that guarantees that the accessible information I_acc(X;B) is of order ε. This estimate is expressed in terms of the parameters c_min and γ introduced in Section 4.

Proposition 1. Consider the QDL protocol described in Section 3, which encodes log M bits of information using n photons over m modes. For any ε, ξ ∈ (0,1), and for any K > K_ε, there exist choices of K linear optics unitaries such that I_acc(X;B) < 2ε log(1/c_min), where K_ε is given in Eq. (24) and M = ξC.
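Proposition 1 is stated in terms of c_min and γ. Appendix D notes that such coefficients were computed in Python from permanents of n×n submatrices of Haar-random m×m unitaries; a minimal sketch of that kind of estimate for the single-occupancy coefficient follows (our reconstruction; the pattern choices and sample count are illustrative):

```python
import numpy as np
from itertools import permutations
from math import factorial

rng = np.random.default_rng(1)

def haar_unitary(m):
    """Haar-random m x m unitary via QR of a complex Gaussian matrix."""
    z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix the phases of R's diagonal

def permanent(a):
    """Naive permanent, adequate for the small n used here."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

# Estimate c_q for the single-occupancy pattern q = (1,...,1,0,...,0):
# c_q = E_U |<q|U|psi>|^2, with psi also single-occupancy, and <q|U|psi>
# equal to the permanent of the n x n submatrix picked by the two patterns.
m, n, samples = 10, 2, 20000
rows, cols = list(range(n)), list(range(n))     # occupied output/input modes
est = np.mean([abs(permanent(haar_unitary(m)[np.ix_(rows, cols)]))**2
               for _ in range(samples)])
print(est, factorial(n) / m**n)   # estimate vs. the no-collision limit n!/m^n
```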
Recall that $d = \binom{n+m-1}{n}$ is the dimension of the Hilbert space of n photons in m modes, and $C = \binom{m}{n}$ is the number of states with no more than one photon per mode. The parameters γ and c_min depend on the particular values of n and m. We identify three regimes for n and m:

1. For n = 1, the group of linear optical passive unitaries spans all unitaries in the subspace of n = 1 photon in m modes. The single-photon representation of the group of LOP unitaries is the fundamental representation of U(m). We then obtain γ = 2 and c_min = 1/m [4,12].

2. In the no-collision regime, m ≫ n² ≫ 1, photon bunching is suppressed and the limiting values γ ≤ 2(n+1) and c_min ≃ n!/m^n apply (see Appendix D).

3. For generic values of n and m, to the best of our knowledge both γ and c_min need to be calculated numerically. The estimation can be simplified if we assume Conjectures 1 and 2 introduced in Sec. 4.

Under these conjectures, Eq. (24) can be written in a form whose terms f and g scale as log(1/ε). For illustration, Fig. 2 shows log M and an estimate of log K_ε as functions of n. To obtain the plot, we have chosen m = n³ and used the limiting values γ = 2(n+1) and c_min = n!/m^n.

Figure 2 (caption). log M and the estimate of log K_ε (Eq. (24)) as functions of n, obtained using γ = 2(n+1) and c_min = n!/m^n, i.e., assuming the values in the no-collision limit. The other parameters are: ε = 2^(−n^s) with s = 0.5 (red dashed); s = 1 (purple dotted); ε = 10^(−10) (green dotted-dashed). If we choose the security parameter ε ∝ 2^(−n^s), then I_acc → 0 as n → ∞. When the blue curve is higher than the other curves, the message is longer than the key; in this case QDL beats the classical one-time pad and allows the initial secret key of log K bits to be expanded into a longer key of log M bits.

Note that, as ε is expected to be sufficiently small, this estimate for the secret key size is useful only in the limit of asymptotically large K_ε, i.e., when one encodes information using asymptotically many modes and photons. This is certainly not the regime one is willing to test in an experimental demonstration of QDL. The QDL protocol outperforms the classical one-time pad when log M > log K_ε for some reasonably small value of ε. Some numerical examples are shown in Fig. 2, where the gap between log M and log K_ε increases with the number of modes and photons. For example, for n = 20, m = 8000, ξ = 0.01, and ε = 10^(−10), we obtain log M ≈ 192 and log K_ε ≈ 127 < 0.7 log M. This shows explicitly that we can achieve information-theoretic security with a secret key shorter than the message if n and m are large enough.

Scaling up the communication protocol: in a practical communication scenario, not just one signal but a large number of signals are sent from Alice to Bob through a given quantum communication channel. Consider a train of ν ≫ 1 channel uses, where Alice encodes a classical variable X^(ν) into tensor-product code words of the form |ψ_x⟩ = |ψ_{x_1}⟩ ⊗ ... ⊗ |ψ_{x_ν}⟩, where each component ψ_{x_i} is a state of n photons in m modes. Over ν channel uses, the total number of code words is denoted M^(ν) = ξC^ν, and the code rate is lim_{ν→∞} (1/ν) log M^(ν) = log C. Similarly, Alice applies local unitaries to these code words, for a total number of K^(ν) allowed unitaries acting on ν channel uses. We denote by B^ν the outputs of the ν channel uses received by Bob. The security condition on the mutual information then reads I_acc(X^ν; B^ν) = O(ε log M^(ν)). The minimum secret key consumption rate is then given by Corollary 1, which allows us to estimate the net secret key rate as the difference between the code rate and the secret key consumption rate, r_QDL = log C − k, where Conjecture 1 implies k = log γ + log(d/C).
If r_QDL > 0, then QDL succeeds in beating the classical one-time pad and generates a secret key at a rate of log C bits per channel use, larger than the key consumption rate of k bits. We can compare these results with the classical one-time pad encryption as well as with previously known QDL protocols. We consider the three parameters that characterise symmetric-key encryption: the length log K of the initial secret key, the length log M of the message, and the security parameter ε. The classical one-time pad requires log K = log M for perfect encryption (ε = 0). Therefore, the comparison with QDL makes sense in the regime where ε can be made arbitrarily small. In this regime, we say that a QDL protocol beats the classical one-time pad if K ≪ M.

The QDL protocol with the largest gap to date between K and M was proposed by Fawzi et al. in Ref. [7]. This protocol requires an initial key of constant size log K ∼ log(1/ε) for any sufficiently large M. This is obtained by using random unitaries in the M-dimensional Hilbert space, and it therefore requires a universal quantum computer acting on a large Hilbert space. Proposition 1 shows that there exist QDL protocols with log K ∼ O(log(1/ε)) + log(d/M) = O(log(1/ε)) + log(d/C) + log(1/ξ). Compared with Ref. [7], the length of the secret key thus has an overhead proportional to log(d/C) + log(1/ξ). The advantage with respect to Ref. [7] is that the encryption only requires linear optical passive unitaries. For m and n large, using the Stirling approximation we obtain log(d/C) ≈ (n(n−1)/m) log e, which becomes negligibly small in the limit of diluted photons, m ≫ n² ≫ 1.

Corollary 1 shows the existence of QDL protocols for ν channel uses where a secret key of log K ∼ ν (log γ + log(d/C)) bits allows us to encrypt log M ∼ ν log C bits, with ε → 0 in the limit ν → ∞, where the constant γ depends on the particular choice of the parameters n and m. Note that in these protocols the secret key length log K is not constant but scales with the message length log M. Although they have the same scaling, we can still have log M > log K in some regime. Despite being less efficient in terms of key use, the advantage of these protocols is that they only need linear optical passive unitaries acting on a small number of photons and modes, i.e., n and m can be chosen finite and small. For example, for n = 10 photons in m = 30 modes, we obtain log M ≈ 25 and log(d/M) ≈ 4.4 < (1/5) log M. From Table 4 we also obtain the numerical estimate log γ < log(111.5) ≈ 6.8 < (1/3) log M. Putting k = lim_{ν→∞} (1/ν) log K, we obtain for the asymptotic rate of secret key consumption k ≤ log γ + log(d/M) < 11.2 bits. This shows explicitly that fewer than log M bits of secret key are used to encrypt a message of log M bits. Therefore, the net key generation rate in this case, r_QDL = log C − k, is positive. In Section 8 we consider the effect of photon loss in terms of the net rate per mode, r_QDL/m.

Proof of Proposition 1

We prove the proposition using a random-coding argument. We show that a random choice of the code and of the set of scrambling unitaries leads, with high probability, to a QDL protocol that satisfies the security property. The code book $\tilde{C}_n^m$ of cardinality M is randomly chosen by sampling from the code book $C_n^m$ of cardinality C. We put M = ξC. For ξ ≪ 1, we expect the M code words to be all distinct up to terms of second order in ξ; therefore, the M code words encode log M − O(log(1/ξ)) bits of information. The sender Alice first prepares a state |ψ_x⟩, then applies a linear optics unitary U_k.
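The counting behind these numerical examples is elementary and can be checked directly; a quick sketch reproducing the n = 10, m = 30 and n = 20, m = 8000 figures quoted in the text:

```python
from math import comb, log2, e

for n, m, xi in [(10, 30, 1.0), (20, 8000, 0.01)]:
    C = comb(m, n)            # number of single-occupancy code words
    d = comb(n + m - 1, n)    # full n-photon Hilbert-space dimension
    print(f"n={n}, m={m}:")
    print(f"  log2 M    = {log2(xi * C):7.2f}")   # ~24.8 and ~192
    print(f"  log2(d/C) = {log2(d / C):7.2f}")    # key-overhead term (~4.4 for n=10, m=30)
    print(f"  Stirling n(n-1)/m * log2(e) = {n * (n - 1) / m * log2(e):.2f}")
```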
The unitary is chosen from a pool of K elements according to a secret key of log K bits. We choose the pool of unitaries by drawing K unitaries i.i.d. according to the uniform Haar measure on the group U_LO(m) of linear optics unitary transformations on m modes. If the receiver does not know the secret key, the state is described by the density operator $\rho_B = \frac{1}{MK}\sum_{x,k} U_k |\psi_x\rangle\langle\psi_x| U_k^\dagger$.

Given the classical-quantum state, Bob attempts to extract information from it by applying a measurement M_{B→Y}. Such a measurement is characterised by the POVM elements {α_y |φ_y⟩⟨φ_y|}_y, where the φ_y are unit vectors and the α_y > 0 satisfy Σ_y α_y |φ_y⟩⟨φ_y| = I, with I the identity operator. Without loss of generality we can consider rank-one POVMs only [2]. The output of this measurement is a random variable Y with probability density $p_Y(y) = \alpha_y \langle\phi_y| \rho_B |\phi_y\rangle$ and conditional probability $p(y|x) = \alpha_y \frac{1}{K}\sum_k |\langle\phi_y| U_k |\psi_x\rangle|^2$. The accessible information is the maximum mutual information between X and Y, over all such measurements. This yields an expression for the accessible information as the difference of two entropy-like quantities (Eq. (41)).

The rationale of the proof is to show that, for K large enough and for random choices of the unitaries and of the code words, both terms in the curly brackets of Eq. (41) are arbitrarily close to ⟨φ_y|ρ̄_B|φ_y⟩ for all vectors φ_y, where ρ̄_B is as in Eq. (10). This in turn implies that the accessible information can be made arbitrarily small. To show this we exploit the phenomenon of concentration towards the average of sums of i.i.d. random variables, quantified by concentration inequalities. We proceed along two parallel directions.

First, we apply the matrix Chernoff bound [30] to show that the operator $\frac{1}{K}\sum_k U_k \big(\frac{1}{M}\sum_x |\psi_x\rangle\langle\psi_x|\big) U_k^\dagger$ concentrates around ρ̄_B, so that ⟨φ|ρ_B|φ⟩ ≈ ⟨φ|ρ̄_B|φ⟩ holds uniformly for all φ up to a failure probability p_1 (Eqs. (44)-(45)). The details are presented in Appendix A below.

Second, we apply a tail bound due to A. Maurer [31] to show that, for each code word, $\frac{1}{K}\sum_k |\langle\phi| U_k |\psi_x\rangle|^2$ concentrates around its mean ⟨φ|ρ̄_B|φ⟩, uniformly for all unit vectors φ and for almost all values of x, up to a failure probability p_2 (Eq. (49)). The details are presented in Appendix B.

Putting the results of Eqs. (45) and (49) into Eq. (41), we finally obtain the bound on the accessible information. Recall that p_Y(y) = α_y ⟨φ_y|ρ̄_B|φ_y⟩ is a probability distribution. Therefore, as the average is always smaller than the maximum, we obtain a bound in terms of c_min := min_φ ⟨φ|ρ̄_B|φ⟩, which can be computed as shown in Section 4. The above bound on the accessible information is not deterministic, but the probability p_1 + p_2 that it fails can be made arbitrarily small provided K is large enough (see Appendix C for details). This probability is bounded away from 1 if K exceeds the threshold stated in Proposition 1. The size of K critically depends on the factor γ, which determines the convergence rate of the Maurer tail bound; how to estimate this coefficient is the subject of Appendix D.

Proof of Corollary 1

Consider a train of ν ≫ 1 channel uses. Alice encodes information using M^(ν) code words of the form |ψ_x⟩ = |ψ_{x_1}⟩ ⊗ |ψ_{x_2}⟩ ⊗ ... ⊗ |ψ_{x_ν}⟩, where each component ψ_{x_i} is chosen randomly and independently from the code book $C_n^m$, which has cardinality C. Each ν-fold code word is uniquely identified by the multi-index x = (x_1, ..., x_ν). We put M^(ν) = ξC^ν, where ξ ≪ 1 is a small positive constant. First Alice encodes information across ν signal uses using the code words ψ_x; then she applies local unitaries U_k = U_{k_1} ⊗ U_{k_2} ⊗ ... ⊗ U_{k_ν} to scramble them. The set of possible unitaries contains K^(ν) elements.
These unitaries are chosen by sampling identically and independently from the Haar measure on the group U_LO(m) of linear optical passive unitary transformations on m modes. Note that, whereas ν is arbitrarily large, the number of modes m in each signal transmission is kept constant and relatively small. Also, the number of photons per channel use is fixed and equal to n. In conclusion, we can straightforwardly repeat the proof of Section 6 with these new parameters. This yields that, for any arbitrarily small ε, the security bound holds with non-zero probability provided K^(ν) exceeds a threshold analogous to that of Proposition 1 (recall that M^(ν) = ξC^ν), Eq. (62). Finally, taking the limit ν ≫ 1 yields the key consumption rate stated in Corollary 1.

Noisy channels

A practical communication protocol needs to account for loss and noise in the communication channel. This requires us to introduce error correction in the classical post-processing. We address this issue here and show that the structure of our proof encompasses a large class of error-correcting protocols.

In the case of a noisy and lossy channel, Alice and Bob can still use the channel by employing error correction. Error correction comes with an overhead that reduces the maximum communication rate from log M (the maximum amount of information that can be conveyed through a noiseless channel) to I(X;Y|K) ≤ log M, where I(X;Y|K) is the mutual information given that both Alice and Bob know the secret key K. The amount of loss and noise in the communication channel can be determined experimentally with the standard tools of parameter estimation, a routine commonly employed in quantum key distribution. This in turn allows Alice and Bob to quantify I(X;Y|K). In principle, error correction allows Alice and Bob to achieve a communication rate arbitrarily close to I(X;Y|K). In practice, however, this goal can only be partially achieved. To model this fact, one usually introduces the error-correction efficiency factor β ∈ (0,1). Putting this together with Corollary 1, we obtain our estimate for the net rate of the protocol, r_QDL = β I(X;Y|K) − k, where a positive net rate expresses the fact that the QDL protocol allows us to expand the initial secret key into a longer one.

As an example, consider the case where Alice and Bob communicate through a lossy optical channel. The efficiency factor η ∈ (0,1) represents the probability that a photon sent by Alice is detected by Bob, including both channel losses and detector efficiency. The mutual information I(X;Y|K) between Alice and Bob can then be computed explicitly (see Appendix E for details), Eq. (66). Fig. 3 shows the quantity r_QDL/m, i.e., the number of bits per mode, for β = 1 and a pure-loss channel with transmissivity η. The plot is obtained assuming Conjectures 1 and 2.

Figure 3 (caption). Net rate per mode, r_QDL/m (Eq. (56)), of the QDL protocol compared with the classical one-time pad, in the presence of loss. A positive rate expresses the fact that the QDL protocol generates more secret bits than it consumes, hence beating the classical one-time pad encryption. The estimates of the parameters γ and c_min are obtained by assuming Conjectures 1 and 2. We see that the information density per mode increases as m increases. We have chosen n to maximise the rate; the optimal value of n depends on η, with n ≈ m/3 for η ≈ 1. For moderate losses the optimal n decreases, suggesting that QDL may be observed under high loss by increasing the number of modes. These values for the number of photons and modes are similar to those of a recent experimental demonstration of Boson Sampling [32].
This shows that QDL can be demonstrated experimentally with loss and inefficient detectors. In particular, higher loss can be tolerated by increasing the number of optical modes. Note that the values for the number of photons and modes used to obtain this figure have been achieved experimentally in Ref. [32].

Conclusions

The phenomenon of Quantum Data Locking (QDL) represents one of the most remarkable separations between classical and quantum information theory. In classical information theory, information-theoretic encryption of a string of N bits can only be achieved by exploiting a secret key of at least N bits. This is realised, for example, by using a one-time pad. By contrast, QDL shows that, if information is encoded into a quantum system of matter or light, it is possible to encrypt N bits of information with a secret key of k ≪ N bits. QDL is a manifestation of the uncertainty principle in quantum information theory [8,9]. Initial works on QDL focused on abstract protocols defined in a Hilbert space of asymptotically large dimension. More recent works have extended QDL to systems of relatively small dimension that are transmitted through many uses of a communication channel. This approach allowed the incorporation of error correction and led to one of the first experimental demonstrations of QDL in an optical setup [13]. Inspired by Boson Sampling [33,34], in this work we have further extended QDL to a setup where information is encoded using multiple photons scattered across many modes and processed using passive linear optics. The extension of QDL to multiphoton states is technically challenging due to the role played by higher-order representations of the unitary group. Our protocol for multiphoton QDL has the potential to data-lock more bits per optical mode, and hence can achieve a higher information density. Experimental realisations of our protocols are challenging but feasible with state-of-the-art technology. This is suggested by recent results in photon generation and advances in integrated linear optics; e.g., Ref. [32] reported interference of 20 photons across 60 modes. Several works have attempted to apply the physical insights of Boson Sampling in a quantum information framework beyond its defining problem. In this paper, we provide a protocol for quantum cryptography based on the physics of Boson Sampling. We have presented an information-theoretic proof that a linear-optical interferometer, fed with multiple photons, is useful for quantum cryptography. The security of our protocol does not rely on the classical computational complexity of Boson Sampling; therefore it holds for any number of modes m and any photon number n. The security proof is based on QDL and random coding techniques. We have shown that our protocol remains secure when classical error correction is used to protect the channel against photon loss and other errors. It is therefore a scalable and efficient protocol for quantum cryptography.

A Matrix Chernoff bounds

The matrix Chernoff bound states the following (this formulation can be obtained directly from Theorem 19 of Ref. [30]): let $X_1, \ldots, X_K$ be i.i.d. Hermitian-matrix-valued random variables, with $X_t \sim X$, $0 \le X \le R$, and $c_{\min} \le E[X] \le c_{\max}$. Then, for δ ≥ 0, the probability that the sample average $\frac{1}{K}\sum_t X_t$ deviates from $E[X]$ by more than a factor (1 ± δ) is exponentially small in K, where Pr{x} denotes the probability that the proposition x is true. Note that for δ > 1 the exponent of the bound scales linearly in δ, and for δ < 1 it scales as δ². First consider the collection of M code words $\psi_x$. We apply the Chernoff bound to the M independent random variables $X_x = |\psi_x\rangle\langle\psi_x|$. Note that these operators are defined in a C-dimensional Hilbert space.
For τ > 1 we then have the corresponding tail bound for the code words. Consider now the collection of K random variables $X_k = \frac{1}{M}\sum_x U_k |\psi_x\rangle\langle\psi_x| U_k^\dagger$. We assume that they are bounded by $R = \frac{1+\tau}{C}$. We apply the Chernoff bound again, and the total failure probability is the sum of the two contributions. Choosing τ as a suitable function of C, M, K and $c_{\min}$, we obtain that, up to a probability smaller than $p_1$, the first concentration bound holds.

B The Maurer tail bound

We also need to apply the following concentration inequality due to A. Maurer [31]: let $X_1, \ldots, X_K$ be K i.i.d. non-negative real-valued random variables, with $X_k \sim X$ and finite first and second moments, $E[X], E[X^2] < \infty$. Then, for any τ > 0 we have
$$\Pr\left\{ \frac{1}{K}\sum_{k=1}^{K} X_k \le E[X] - \tau \right\} \le \exp\left( -\frac{K\tau^2}{2\,E[X^2]} \right).$$
For any given x and φ, we apply this bound to the random variables $X_k = |\langle\phi|U_k|\psi_x\rangle|^2$, whose first and second moments are computed in Section 4. The application of the Maurer tail bound then yields the second concentration result, with rate controlled by
$$\gamma := \frac{2\,E[X^2]}{E[X]^2}.$$
Note that, by symmetry, γ is independent of $\psi_x$. The calculation of γ is presented in Appendix D.

B.1 Extending to almost all code words

The probability bound in Eq. (80) concerns one given value of x. Here we first extend it to ℓ distinct values $x_1, x_2, \ldots, x_\ell$, using the fact that for different values of x the variables are statistically independent (recall that the code words are chosen randomly and independently). Second, we extend to all possible choices of code words; applying the union bound over these events we obtain the uniform statement.

D Estimating the factor γ

The goal of this Appendix is to estimate the factor γ that determines the secret key consumption rate. The objective is therefore to evaluate the first and second moments of the random variable $X = |\langle\phi|U|\psi\rangle|^2$, where φ is restricted to be a vector in the single-occupancy subspace $H_1$, which is our code space. A generic state can be expanded over the photon-occupancy subspaces $H_q$, and we can apply the Cauchy-Schwarz inequality as shown in Section 4. This yields (see Eq. (21)) quantities $\gamma_q$ which, by symmetry, depend on q but not on the particular vector $\phi_q$ in the subspace $H_q$, nor on the code word ψ. Therefore, for each q, $\gamma_q$ can be computed numerically, and in turn we obtain an estimate of the upper bound on the speed of convergence, $\gamma \le 2\max_q \gamma_q$ (103). The object $\Lambda[1^{i_1}, 2^{i_2}, \ldots \,|\, 1^{j_1}, 2^{j_2}, \ldots]$ denotes a matrix whose entries are taken from the matrix Λ, whose row index l occurs $i_l$ times, and whose column index k occurs $j_k$ times. For example,
$$\begin{pmatrix} \Lambda_{k_1 l_1} & \Lambda_{k_1 l_2} & \Lambda_{k_1 l_3} \\ \Lambda_{k_2 l_1} & \Lambda_{k_2 l_2} & \Lambda_{k_2 l_3} \\ \Lambda_{k_3 l_1} & \Lambda_{k_3 l_2} & \Lambda_{k_3 l_3} \end{pmatrix} \qquad (105)$$
Using Eq. (105), we can calculate Eq. (100) for a particular photon occupancy pattern. We numerically compute $\gamma_q$ for different photon patterns for n between 2 and 8; examples are given in Tables 2 and 3. Note that the number of configurations to search over grows exponentially with n, and thus the search becomes infeasible for high n. The calculations were performed in Python by computing the permanents of n×n submatrices of the m×m unitaries generated from the Haar measure. The expectation value is taken by averaging over ∼10⁶ runs. We observe that the highest value of $\gamma_q$ is achieved when all the photons populate only one mode. To make the calculation feasible, we conjecture (Conjecture 2) that this is also true for higher n; in this case, the computation can be performed much more efficiently because the submatrices have repeated rows. This conjecture has been used to produce the plots in Fig. 3. We repeat the calculation for n = 9 to 13; the results are shown in Table 4. We now consider the regime m ≫ n², in which we can neglect photon bunching. Therefore, we compute the first and second moments of the random variable $X = |\langle\psi_{j'}|U|\psi_j\rangle|^2$.
(106) This is a little less general than (98) because $\psi_{j'}$ is not a generic vector in $H_m^n$. In fact, $\psi_j$ and $\psi_{j'}$ identify two sets of modes, with labels $(i_1, i_2, \ldots, i_n)$ and $(i'_1, i'_2, \ldots, i'_n)$, respectively. This corresponds to photon-counting on the modes, which, as we know, maps onto the n × n sub-matrix $A^{(jj')}$ of the unitary matrix U. The random variable X is the modulus squared of the permanent of $A^{(jj')}$:
$$X = |\mathrm{Perm}(A^{(jj')})|^2, \qquad \mathrm{Perm}(A) = \sum_\pi \prod_{i=1}^{n} A_{i\,\pi(i)},$$
where the sum is over all the permutations π. To further explore the statistical properties of the permanent, it is useful to recall that a given entry of a random m × m unitary is itself distributed approximately as a complex Gaussian variable with zero mean and variance 1/m. If instead we consider a submatrix of size n × n, the entries are to a good approximation independent Gaussian variables as long as n ≪ m [33]. This means that the entries of $A^{(jj')}$ can be treated as i.i.d. Gaussians, so that
$$E[X] = E\big[|\mathrm{Perm}(A)|^2\big] = \sum_{\sigma,\tau} \prod_{i=1}^{n} E\big[A_{i\,\sigma(i)} \bar{A}_{i\,\tau(i)}\big] = \frac{n!}{m^n},$$
since the non-zero terms are given by τ = σ. From Lemma 56 of Ref. [33], the fourth moment of the permanent can be computed as
$$E[X^2] = E\big[|\mathrm{Perm}(A)|^4\big] = (n+1)\left(\frac{n!}{m^n}\right)^2.$$
In conclusion, we have obtained $E[X^2]/E[X]^2 = n+1$, from which it follows that γ ≤ 2(n + 1).
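The moment estimates above lend themselves to a quick numerical check in the spirit of the Python computation described in Appendix D. The sketch below is our construction, not the authors' published code: it draws Haar-random m×m unitaries with scipy, evaluates the permanent of the leading n×n submatrix with Ryser's formula, and estimates E[X], E[X²] and γ = 2E[X²]/E[X]². In the regime m ≫ n² the printed γ should approach 2(n + 1).

import numpy as np
from math import factorial
from itertools import combinations
from scipy.stats import unitary_group

def permanent(A):
    """Permanent via Ryser's formula, O(2^n); adequate for small n."""
    n = A.shape[0]
    total = 0.0 + 0.0j
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            rowsums = A[:, cols].sum(axis=1)
            total += (-1) ** r * np.prod(rowsums)
    return (-1) ** n * total

m, n, runs = 60, 3, 20000    # modes, photons, Monte Carlo samples
rng = np.random.default_rng(1)
X = np.empty(runs)
for t in range(runs):
    U = unitary_group.rvs(m, random_state=rng)
    # Photon counting maps onto an n x n submatrix A^(jj') of U
    X[t] = abs(permanent(U[:n, :n])) ** 2

EX, EX2 = X.mean(), (X ** 2).mean()
print(f"E[X] * m^n / n! = {EX * m**n / factorial(n):.3f}   (expect ~1)")
print(f"gamma = 2 E[X^2] / E[X]^2 = {2 * EX2 / EX**2:.2f}   (expect ~{2 * (n + 1)})")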
9,331
2021-04-28T00:00:00.000
[ "Computer Science", "Physics" ]
Histopathological Effects of Mercury on Male Gonad and Sperm of Tropical Fish Gymnotus carapo in vitro

Hg is a toxic metal, mostly owing to its adverse effects on the structure and function of tissues and organs in humans and animals. The male reproductive systems of fish species are also sensitive to Hg action. However, the histological alterations in tropical fish testis are less well known, and little information is available concerning the underlying mechanisms of metal pathogenesis in reproductive function. Further investigations dealing with the direct effects of Hg on the tissues and organs of tropical species are needed. The present study investigated the toxic effects of HgCl2 on the testes and sperm of the tropical fish Gymnotus carapo. Histopathology, germ cell structure and germ cell number were analysed to elucidate the pathological process during exposure to increasing metal concentrations (1 μM - 30 μM). Fishes exposed to 20 μM and 30 μM reached testicular Hg concentrations of 5.1 μg.g-1 and 5.2 μg.g-1, respectively. No significant alterations in the gonadosomatic index (GSI) occurred between control and Hg-exposed fishes. Untreated fishes showed the characteristic organization of testicular tissue, with the germ epithelium organized in cysts where spermatogenesis occurs; germ cells and spermatozoa are seen within the cysts. HgCl2 induced severe damage characterized by complete disorganization of the seminiferous lobules, proliferation of interstitial tissue, congestion of blood vessels, reduction of germ cells and sperm aggregation. Exposed fishes showed a decrease in sperm number: an initial reduction (36.8%) was observed after the 20 μM/24 h treatment, and a further decrease (48.7%) was observed after 20 μM/96 h. Hg (20 μM) also altered sperm morphology at 24 h and 96 h, when sperm head abnormalities were present. In conclusion, the present study showed progressive HgCl2 damage to the testicular tissue, sperm count and sperm morphology of the tropical species Gymnotus carapo. Effects in testicular tissue were observed even at low Hg concentrations. These results are important for establishing a direct correlation between mercury accumulation and the severity of lesions.

Introduction

Mercury (Hg) is a toxic environmental pollutant that induces several adverse effects in many tissues and organs of humans and animals. The male reproductive system is also sensitive to Hg (Boujbiha et al., 2009). In fishes, Hg can inhibit gametogenesis, induce testicular atrophy, and impair individual reproduction. Although some Hg-induced damages are known, there is still little information about metal accumulation in tissue and its relationship to the ultrastructural disorganization of fish testis, especially for tropical species.

Sperm are also useful in assessing Hg impacts on the male reproductive system. In rodents, acute contamination decreased the reproductive quality of gametes, affecting sperm morphology, count, motility and viability. Further investigation of such parameters is needed for different fish species, since Hg may dramatically decrease sperm performance in the aquatic environment, affecting fertilization success and altering fish populations.

Gymnotus carapo (tuvira) is a tropical freshwater fish. In the present study, the toxic effects of HgCl2 on the testes and sperm of the teleost Gymnotus carapo were observed, elucidating the pathological process during in vitro exposure to increasing concentrations (1 µM - 30 µM).
Fish contamination

Gymnotus carapo specimens (n = 116) used in the present study were all males at the same stage of sexual maturity, obtained from Cima Lake, in the north of Rio de Janeiro state (21º46' S, 41º31' W). The distribution of heavy metals in the sediments and biota of Cima Lake has already been described, and the area is characterized as having low levels of metal pollution (Ferreira et al., 2003).

For each HgCl2 contamination, 6 adult male fish were selected: four exposed to HgCl2 (5 µM, 10 µM, 20 µM or 30 µM) and two kept as the control group. The fishes were exposed for different exposure times (24 h, 48 h, 72 h or 96 h), and control fishes were always dissected at 24 h and 96 h of exposure. In order to increase the sampling for chemical and histological analysis, the procedure was repeated several times for each time/concentration tested.

Contamination was achieved by intraperitoneal injection of HgCl2 solution, while the control group was injected with phosphate buffer solution. To avoid differences in treatment, all fish used for this study were of similar size (length: 32 ± 1 cm; weight: 125.8 ± 17.8 g). After each exposure time, the specimens were measured, weighed and dissected to obtain the testes for further analysis.

Hg chemical determination in the testes

For mercury detection, testis samples from control and contaminated fishes underwent strong acid digestion according to the methodology described by Bastos et al. (1998). All Hg determinations were performed by atomic emission spectroscopy using an ICP-AES instrument (Varian, Liberty II model) with a cold vapor accessory (VGA-77). The method detection limit was calculated according to Skoog and Leary (1992) as 0.23 µg.g-1.

Analysis of testis morphology

Samples of control and Hg-exposed testes were fixed in 10% neutral buffered formalin for 24 hours. The samples were then dehydrated in a progressive alcohol series, cleared in xylene and embedded in paraffin. The samples were sectioned (5 µm) and stained with hematoxylin and eosin (H&E) for examination by light microscopy. Samples of the testis (approximately 1 mm³) were also fixed in formaldehyde 4%, glutaraldehyde 2.5%, cacodylate buffer 0.1 M, sucrose 5% and calcium chloride 5 mM, post-fixed (1:1) in osmium tetroxide 1% and potassium ferricyanide 0.8%, dehydrated with acetone and embedded in Epon®. Semithin sections (0.4 µm) were obtained using a Reichert Ultracut ultramicrotome (Leica). Sections stained with toluidine blue (1%) were observed by light microscopy.

Sperm sampling

Following the damage observed in the testes, Hg effects on sperm were evaluated at the 20 µM concentration after 24 h and 72 h. Testes from control (n = 4), 20 µM/24 h Hg-exposed (n = 2) and 20 µM/72 h Hg-exposed (n = 2) fish were minced with anatomical scissors in 2 mL of cacodylate buffer 0.1 M (pH 7.2) for 5 minutes at room temperature. After dilution, sperm number was counted in a hemocytometer under light microscopy using phase contrast at ×400 magnification.

Statistical analysis

All values are expressed as mean ± SD. Significant differences were determined with GraphPad Prism v.4 software (GraphPad Software, Inc., CA, USA). Two-way analysis of variance followed by Bonferroni's test was performed for the Hg concentration data, and one-way analysis of variance followed by Tukey's test was used for the sperm data. Differences were considered significant when p < 0.05.
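The sperm-count statistics described above (one-way ANOVA followed by Tukey's test) can be reproduced outside GraphPad Prism. Below is a minimal Python sketch with invented example counts (the paper's raw data are not reproduced here), using scipy for the ANOVA and statsmodels for the Tukey post-hoc comparison.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical sperm counts (10^6 cells/mL), for illustration only
control = np.array([52.1, 49.8, 55.3, 50.6])
hg_24h  = np.array([33.0, 31.5, 34.2, 32.8])   # roughly a 36.8% reduction
hg_96h  = np.array([26.3, 25.1, 27.9, 26.0])   # roughly a 48.7% reduction

# One-way ANOVA across the three groups
f_stat, p_val = stats.f_oneway(control, hg_24h, hg_96h)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's post-hoc test identifies which group pairs differ (alpha = 0.05)
counts = np.concatenate([control, hg_24h, hg_96h])
groups = ["control"] * 4 + ["Hg 20uM/24h"] * 4 + ["Hg 20uM/96h"] * 4
print(pairwise_tukeyhsd(counts, groups, alpha=0.05))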
Results and Discussion

External examination was performed on each fish before and after the experiments. Control and Hg-treated fishes appeared healthy according to the condition of their gills, eyes and scales. Internal organs such as the liver, the kidney and especially the testis did not present macroscopic anatomical alterations.

As described in Figure 1, testicular tissue in untreated fishes showed the characteristic organization of cysts in which spermatogenesis occurs (Fig. 1a). Within the cysts, germ cells (Fig. 1b, c) at different stages of differentiation are distributed as primary (SPGI) and secondary (SPGII) spermatogonia, primary (SPCI) and secondary (SPCII) spermatocytes, and spermatids (SPD). Hg chemical analysis revealed that fishes treated with 20 µM and 30 µM reached the highest testicular concentrations, of 5.1 µg.g-1 and 5.2 µg.g-1, respectively (Table 1). Hg concentrations in control fishes and in those treated with 5 µM and 10 µM were below the detection limit of the method (Table 3). These results are in agreement with the severe damage observed by histopathological analysis and indicate that even low Hg doses can induce morphological alterations in the testis.

The present study also enhances knowledge about the progressive accumulation of HgCl2 and its adverse effects on testicular tissue structure. Importantly, this study showed that histological damage started at concentrations not detectable by the accumulation method (< detection limit). These results are important for establishing a direct correlation between mercury accumulation and the severity of tissue lesions (Boujbiha et al., 2009).

Hg induced severe damage to the testis arrangement, affecting the germ cells involved in the spermatogenesis process. Therefore, investigations of sperm were also performed for an overall evaluation of Hg effects on the male gonad.

Conclusion

The present study showed progressive HgCl2 damage to the testicular tissue, sperm count and sperm morphology of the tropical fish species Gymnotus carapo. These results are important for establishing a direct correlation between Hg accumulation and the severity of lesions, since testis analysis was performed from Hg concentrations below the method detection limit up to higher doses that induced severe damage. Moreover, this work adds to the data on the toxicological effects of Hg in fishes from tropical regions.

Figure 1 (caption, continued): These cells undergo a number of cell divisions until sperm formation within the cysts (arrowheads) (Fig. 1b, c). Between the cysts, interstitial tissue (it) is present, composed of Leydig cells, blood/lymphatic vessels and connective tissue (Fig. 1b).

Figure 2 (caption): Hg induced changes in treated fishes at all administered concentrations, and the effects became more severe with increasing dose/time. Hg treatment induced complete disorganization of the cyst arrangement (Fig. 2), with congestion of blood vessels (Fig. 2b) and proliferation of interstitial tissue (Fig. 2b). Severe damage was observed at the higher concentrations (20 µM and 30 µM), including reduction of germ cells (Fig. 2d), marked variation in cyst size (Fig. 2c), interstitial and lobular disintegration (Fig. 2d), and sperm aggregation (Fig. 2c).

Table note: nd, not detected, as concentrations were below the method detection limit. The letters a, b indicate group means that are significantly different at the 5% significance level.
Figure 3 (caption): Effect of mercury on sperm count after 20 µM treatment for different exposure times (24 h and 96 h). The letters a, b and c indicate values that are significantly different at the 5% significance level.
2,121
2013-01-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Analysis and Comparison about the Common Remedy of Respiratory Viruses through Data Mining

Respiratory diseases account for a large proportion of all diseases. Among them, the main diseases we are now dealing with are caused by viruses for which no widely used vaccine has been found: Human Rhinovirus 14 (HRV), Human Coronavirus OC43 (HCoV), Respiratory Syncytial Virus (RSV), and Human Parainfluenza Virus 1 (HVJ). Even though the body can clear most of these viruses by itself, some cases end in death. Motivated by this, we grouped the viruses by their basic symptoms and appearance and, using data mining, looked for similarities and differences among their sequences. As a result, the decision tree showed, with high frequency, that the sequences are very different from each other; however, the decision tree only reveals the differences between the sequences. Using the apriori algorithm, it may be possible to find a remedy that blocks the amino acid L, leucine.

Introduction

Coronavirus

Coronavirus has a single-stranded RNA genome and a helically symmetric nucleocapsid. It is an enveloped virus. In particular, among the coronaviruses, SARS coronavirus carries the S protein and hemagglutinin esterase on its envelope; these proteins help the virus attach to the cell membrane.

Coronavirus uses vertebrates as hosts and causes various diseases. Among the variety of coronaviruses, only six are known to infect humans. The virus usually infects the upper airway of the respiratory system and the gastrointestinal tract. However, SARS coronavirus infects both the upper and lower airways owing to its unique pathogenesis.

Coronavirus is known as a major cause of the common cold in adults, appearing mostly during spring and winter. Unfortunately, however, the virus is difficult to culture in the laboratory, which makes it hard to determine its exact contribution to colds. It can cause viral or bacterial pneumonia in serious cases.

There is a vaccine for the coronavirus that infects dogs, but for now there is no vaccine or remedy for humans. Fortunately, a recent study showed an inhibitory effect of the chemical compound K22 on the proliferation of coronavirus, so a therapeutic agent is likely to be developed.

Human Respiratory Syncytial Virus (RSV)

RSV has a single-stranded RNA genome and is an enveloped virus [1]. The envelope carries the F and G proteins. The F protein causes syncytium formation by inducing fusion of the virus and the cell, while the G protein helps RSV attach to the membranes of nearby cells.
RSV is spread mainly by physical contact and has an incubation period of about 5 days. It infects both the upper and lower airways, but in adults there are no serious symptoms [2]. However, in infants and premature babies with lowered immunity, it is a major cause of pneumonia and bronchiolitis, both acute respiratory infections. Some infants infected with this virus can develop asthma even after they grow up.

FI-RSV was developed as a vaccine for RSV, but it was found to exacerbate the disease.

Three lines of research address countermeasures against RSV. The first uses passive immunization, exploiting the F and G proteins on the envelope, which play a major role in the initial RSV infection. Palivizumab, a monoclonal antibody that targets the F protein, is used for infants with weak immune systems. The second line uses antivirals: ribavirin is an antiviral for RSV, but its effect is not certain. The last uses active immunity and is still under development. Among the candidate vaccines, a recombinant vaccine injected into the nasal cavity combines attenuated recombinant RSV mutants and uses the F protein as an antigen.

Parainfluenza Virus (HVJ)

Parainfluenza virus has a single-stranded RNA genome and an envelope carrying the F protein, M protein, P protein and spike protein. It has 5 serotypes.

When the virus infects adults, it causes upper respiratory inflammation; when children are infected, they exhibit bronchiolitis and pneumonia. Its main symptom is acute laryngotracheobronchitis, but in 2014, as the virus gained attention, its reach became wider and it showed severe symptoms such as pneumonia, bronchiolitis and worsening of asthma. When acute laryngotracheobronchitis is aggravated, it usually ends in a progressive cough accompanied by stridor, hyperventilation and inspiratory retraction. It produces phlegm, but in older age groups it causes only slight symptoms.

There is no vaccine, but it has been discovered that ultraviolet rays can inactivate the virus.

Rhinovirus

Rhinovirus has a single-stranded RNA genome and no envelope. It has more than 100 serotypes, so it is hard to prevent by vaccination. It appears in all seasons but occurs most often in spring and autumn. It has very low heat and acid tolerance and mostly causes upper respiratory inflammation. When the pH goes below 6, the virus is inactivated, unlike other enteroviruses. By adulthood, the host carries neutralizing antibodies against almost all serotypes; furthermore, regardless of serotype, the immunity lasts for 2-16 weeks. Rhinovirus adheres to specific cell receptors to infect the cell. In particular, infection most often occurs by self-inoculation, when contaminated hands touch the conjunctiva or nasal mucosa. The incubation period is very short and its main hosts are mammals. In acute upper respiratory inflammation, the mucus glands in the lower nasal mucosa become hyperactive, congesting the nasal conchae and closing the openings of the paranasal sinuses. Children are frequently infected, and infections last for 4-9 days. It does not usually cause lower respiratory inflammation, although bronchiolitis, pneumonia and asthma can appear. It disappears in a short time without after-effects; however, ear infections, acute sinusitis and other complications can occur due to closure of the Eustachian tube and of the openings of the paranasal sinuses.

Rhinovirus infection usually does not require treatment. However, antibacterial antibiotics are required if bacterial complications occur.
Apriori Algorithm

The apriori algorithm is mainly used to find association rules in the data mining process. The algorithm first extracts the elements that are repeated within a section and then extends to wider ranges, finding repetitions of the same elements [3]. This process shows the overall disposition of the data. It also enables comparison of the association rules of various data groups.

Decision Tree Algorithm

The decision tree algorithm is used in rule mining. The algorithm repeatedly finds a common node, starting from the root node, in the categorized data, and finds features that can bind the respective data into a specific group [4]. Branch points are divided by a binary code and lead to the next branch point until the last node is reached [5]. Although the whole process is not shown in the result, the apparent rules allow prediction of the overall structure of the data.

Apriori Results

Figure 1 shows that, among the amino acids, leucine had the highest level; isoleucine and lysine emerged together, and the remaining amino acids did not appear. Figure 2, like Figure 1, shows a high level of leucine; uniquely, it also shows valine, though at a low level. Figure 3 shows a high level of serine and, apart from that, lysine, leucine and isoleucine, as in Figure 1. Finally, Figure 4, like Figure 3, shows a high level of serine; moreover, unlike the other viruses, it shows arginine, threonine and glycine.

Decision Tree Results

We experimented using 10-fold cross-validation and found that each sequence did not follow the rules of the other sequences. No data set failed to show any rule, but the 17-window of Table 2 showed only 2 rules. In contrast, Table 4 showed many rules compared with the other sequences. This is an artifact arising from a problem in the experimental method: Table 1 (parainfluenza virus), Table 2 (OC43), Table 3 (RSV) and Table 4 (rhinovirus) had sequences of different lengths, and to extract the results we had to amplify the sequences, except for Table 2. In particular, Table 4 was amplified 4 times and so had an advantage in producing rules, whereas it was hard to draw rules out of Table 2. Also, unlike the average result, the frequency of class 2 is very high, which means the sequences are very different.

Amino acid T, which appears considerably in Table 1, is threonine. For a parainfluenza outbreak, the viral L protein, which activates NF-κB, is needed, and this protein requires AKT1 [6]. Amino acid F, which appears considerably in Table 3, is phenylalanine; it mediates the assembly of viral proteins into viral particles.

Conclusion

According to the apriori algorithm, there are two main features. Because every virus had that amino acid, it could be a major cause of respiratory diseases. Unfortunately, studies on the connection between this amino acid and respiratory diseases are still lacking.
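To make the mining step concrete, the following minimal Python sketch (ours; the paper provides no code, and the toy sequence is invented) slides fixed-size windows over an amino-acid sequence and performs one apriori pass, counting frequent single residues and then frequent residue pairs built only from the frequent singles. This mirrors how residue frequencies such as leucine (L) were compared across the four viruses.

from itertools import combinations
from collections import Counter

def frequent_itemsets(windows, min_support=0.8):
    """One apriori pass: frequent residues, then frequent residue pairs."""
    n = len(windows)
    # 1-itemsets: residues present in at least min_support of the windows
    singles = Counter()
    for w in windows:
        for residue in set(w):
            singles[residue] += 1
    freq1 = {r for r, c in singles.items() if c / n >= min_support}
    # 2-itemsets: candidate pairs built only from the frequent residues
    pairs = Counter()
    for w in windows:
        present = freq1 & set(w)
        for pair in combinations(sorted(present), 2):
            pairs[pair] += 1
    freq2 = {p: c / n for p, c in pairs.items() if c / n >= min_support}
    return freq1, freq2

# Invented toy sequence; real input would be the viral protein sequences
seq = "MLLKSLLIVLKLLSEDLLKIVLL"
k = 8   # window size (the paper's 17-residue windows would use k = 17)
windows = [seq[i:i + k] for i in range(len(seq) - k + 1)]
freq1, freq2 = frequent_itemsets(windows, min_support=0.8)
print("frequent residues:", freq1)   # expect L to dominate
print("frequent pairs:", freq2)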
2,170
2015-06-12T00:00:00.000
[ "Computer Science" ]
Microorganisms oxidize glucose through distinct pathways in permeable and cohesive sediments

Abstract

In marine sediments, microbial degradation of organic matter under anoxic conditions is generally thought to proceed through fermentation to volatile fatty acids, which are then oxidized to CO2 coupled to the reduction of terminal electron acceptors (e.g. nitrate, iron, manganese, and sulfate). It has been suggested that, in environments with a highly variable oxygen regime, fermentation mediated by facultative anaerobic bacteria (uncoupled to external terminal electron acceptors) becomes the dominant process. Here, we present the first direct evidence for this fermentation using a novel differentially labeled glucose isotopologue assay that distinguishes between CO2 produced from respiration and fermentation. Using this approach, we measured the relative contribution of respiration and fermentation of glucose in a range of permeable (sandy) and cohesive (muddy) sediments, as well as four bacterial isolates. Under anoxia, microbial communities adapted to high-energy sandy or bioturbated sites mediate fermentation via the Embden-Meyerhof-Parnas pathway, in a manner uncoupled from anaerobic respiration. Prolonged anoxic incubation suggests that this uncoupling lasts up to 160 h. In contrast, microbial communities in anoxic muddy sediments (smaller median grain size) generally completely oxidized 13C glucose to 13CO2, consistent with the classical redox cascade model. We also unexpectedly observed that fermentation occurred under oxic conditions in permeable sediments. These observations were further confirmed using pure cultures of four bacteria isolated from permeable sediments. Our results suggest that microbial communities adapted to variable oxygen regimes metabolize glucose (and likely other organic molecules) through fermentation uncoupled to respiration during transient anoxic conditions.

Introduction

In the absence of oxygen, microbial oxidation of organic matter is initiated by hydrolysis of macromolecules into smaller constituents (e.g. sugars, fatty acids, amino acids), which are then fermented to dissolved inorganic carbon (DIC), molecular hydrogen (H2), alcohols, and volatile fatty acids (VFAs). Heterotrophic bacteria largely use one of three fermentation pathways for degradation of glucose: the Embden-Meyerhof-Parnas (EMP), pentose phosphate (PP) or Entner-Doudoroff (ED) pathways (Fig. 1) [1]. EMP fermentation is generally most widespread and active [1], with recent studies indicating at least 90% of obligate and facultative anaerobes use the EMP pathway [2]. After fermentation, the reduced compounds produced (VFAs, alcohols, and H2) are rapidly oxidized by respiring bacteria, i.e. terminal respiration coupled with fermentation (Fig. 1). This paradigm has been relatively well developed and studied in cohesive sediments (i.e. muds and silts), which typically have stable physical and redox regimes that allow a close coupling between fermenting and respiring bacteria, e.g. Schulz and Zabel [3].
In contrast to cohesive sediments, microorganisms in permeable and bioturbated sediments experience dynamic redox regimes. In permeable sediments (i.e. sands and gravels), wave oscillations, sediment movement, and currents drive advective pore water exchange, which can lead to shifts between oxic and anoxic conditions on timescales of minutes to hours [4,5]. In bioturbated sediments, similar shifts between oxic and anoxic conditions have been observed in relation to faunal pumping [6]. These short-term redox variations select for metabolically flexible microbes, adapted to varying electron donor and acceptor availability. Indeed, the dominant bacteria in permeable sediments are facultative anaerobic bacteria from the families Flavobacteriaceae and Woeseiaceae, which are capable of aerobic respiration, anaerobic respiration, and fermentation [7-9]. In turn, these communities appear to oxidize organic carbon under anoxic conditions through pathways distinct from those in cohesive sediments. Specifically, they mediate organic carbon fermentation but do not fully reoxidize the derived end-products through respiration. Evidence for this comes from lower-than-expected accumulation of end-products of anoxic respiration such as nitrogen (N2), iron (Fe2+), and sulfide (H2S) compared to DIC production, accompanied by the accumulation of hydrogen observed in flow-through reactors in anoxic permeable sediments [9,10]. However, to date all inferences of fermentation have been indirect, and it remains unresolved through what pathways organic carbon is oxidized in anoxic permeable or bioturbated sediments.

Figure 1 (caption): Different carbon degradation pathways from the six-carbon glucose molecule. Glucose can be fermented into acetate and CO2 by the EMP pathway; into CO2, acetate, and lactate by the PP pathway; or into CO2 and acetate by the ED pathway. The oxidation of the VFAs produced during fermentation (acetate, lactate, etc.) is typically coupled to the reduction of anoxic terminal electron acceptors; respiration can also occur directly on glucose with no prior fermentation. The Rn values of each pathway, derived from 13CO2 ratios of different glucose isotopologues (see Calculations), are represented underneath; if fermentation remains uncoupled from respiration, VFAs are not consumed and Rn values remain those of whichever fermentation occurs.

Here we used a differentially labeled 13C glucose assay to compare oxidation pathways in sandy and muddy sediments. Nine sandy and three muddy sediments from Australia and Denmark were incubated with position-specific 13C-labeled glucose isotopologues. Labeled either on the first carbon (1-13C), second carbon (2-13C), third carbon (3-13C), or all six carbons (13C6) (Fig. 1),
the ratio of 13CO2 produced in each treatment (Rn) was then used to determine the dominant carbon oxidation metabolism. Although this broad approach has long been used by biochemists for determining the fermentation pathways of cultivated microbes [11,12], this is the first study to apply the approach quantitatively, and to a range of sediments with complex microbial communities, for direct comparison. We hypothesized that microbial respiration would dominate over fermentation of glucose in cohesive sediments, compared to permeable sediments, except where heavily bioturbated. Moreover, we expected permeable sediments to show increased coupling of fermentation to respiration after prolonged anoxic incubations of days to weeks, in line with previously observed shifts in microbial communities under extended anoxia [9]. To support these inferences, we also extended these assays to four bacterial isolates from permeable sediments, including members of the dominant family Flavobacteriaceae.

Study sites

Between March 2019 and March 2021, nine sandy sediments and three muddy sediments were sampled at sites across Australia and Denmark (Fig. S1). Sites spanned temperate to tropical locations and included both silicate and carbonate sediments. The biogeochemistry of each site has been previously studied, and references to these studies are provided in Table S1. Sandy sites were selected to encompass a range of hydrodynamic regimes and included three high-energy surface sands and three low-energy surface sands. Sites were grouped into high or low energy based on the colored depth profiles of sediment cores and on wave height and frequency. Sites with greater wave action displayed yellow oxidized sediments to greater depths (∼15 cm deep), with anoxic grey layers below. At lower-energy sites, the anoxic sediment (5-10 cm deep) was much darker. Higher-energy locations included Werribee Beach and Melbourne Beach, both enclosed within a large bay yet still frequently exposed to medium to large waves. Heron Reef, located on a coral cay in the Great Barrier Reef, was also deemed high energy. Sites deemed lower energy included Melbourne Harbour, located further up the beach from Melbourne Beach and partially shielded from oncoming waves by a harbor break wall. Low-energy sediments were also collected in Denmark from Faellesstrand, a shallow marine lagoon on the northeast coast of Fyn, and Hjerting Beach, which is shielded from the open ocean by a barrier spit complex known as Skallingen. In addition, deeper, lower-energy sediments were collected at Melbourne Beach, Melbourne Harbour, and Faellesstrand. Both surface (0-15 cm) and deep (15-30 cm) sediments were collected from the subtidal zone. Surficial muddy sediments were sampled from estuaries around Victoria (Yarra, Gippsland, Patterson). The Patterson sediment was heavily infaunated, which is typical for this site [13], while no significant fauna was observed at the other sites.
Slurry incubations

Sandy sediments were sieved (2-mm mesh) to remove large debris and fauna, and seawater was filtered (1.6-11 μm) to remove most pelagic species potentially present in the overlying water. Muddy sediments remained unsieved. Slurries were prepared with 20 g wet sediment and 35 ml filtered seawater in 60-ml incubation vials before sealing with a butyl rubber stopper (Sigma-Aldrich) and Wheaton closed-top seals (Sigma-Aldrich). Anoxic treatments were purged with nitrogen gas to exclude O2, while the headspace of the oxic treatment remained unamended air, corresponding to a dissolved oxygen concentration of ∼230 μM. Melbourne Beach and Melbourne Harbour deep sediments (15-30 cm) were prepared in an N2-filled anoxic chamber to avoid exposure to oxygen. Faellesstrand deep sediments were not prepared in an anoxic chamber and hence were momentarily exposed to oxygen during set-up. After a designated preincubation time (explained below), 13C-labeled glucose isotopologues (1-13C, 2-13C, 3-13C, or 13C6; Cambridge Isotope Laboratories Inc., 98%-99%) were added (50 μM), in parallel with three replicate slurries each (Fig. S2), and the slurries were placed on an orbital shaker (150 rpm, dark, 20 °C) for 4 h. Subsequently, these slurries were left to settle for 5 min, then opened and subsampled.

To determine organic carbon oxidation pathways at four distinct time points, we employed preincubation periods of 0, 1, and 7 days before addition of 13C-labeled glucose, followed by a 4-h incubation (Fig. S2). These preincubation periods were chosen because previous studies have shown that sulfide production in permeable sediment commences after several days of anoxia and that there is a shift in the microbial community towards sulfate reducers over this period [9]. We therefore expected to see a distinct temporal sequence from fermentation to increased respiration over this timeframe. For the 0-day preincubation, glucose was added immediately after the preparation of the slurries. For the 1- and 7-day preincubations, slurries were put on an orbital shaker for gentle agitation (150 rpm) in a light-proof box at 20 °C for the designated period before glucose addition. After the preincubation period, the 13C-labeled glucose isotopologues were added, while three initial slurries (t0) were simultaneously opened and subsampled, as described below, to determine the background concentrations of DIC and 13CO2 (Fig. S2). Slurries with added 13C-labeled glucose were then returned to the orbital shaker.

Note that the experiments were not set up to mirror the constantly fluctuating redox conditions; instead, prolonged oxic and anoxic incubations enabled observation of otherwise transient processes.
Flow-through reactor experiment

Surface sand (0-10 cm) from Melbourne Beach was sieved (1-mm mesh), homogenised, and packed into six cylindrical reactors (4.2 cm length, 4.8 cm inner diameter) as previously described [9,10,14]. Filtered seawater (0.7 μm, GF/F) amended with 1-13C- or 13C6-labeled glucose (50 μM) was pumped through the flow-through reactors (FTRs) using a peristaltic pump at a flow rate of ∼45 ml h−1, giving a pore water residence time of ∼1 h. Both labeled seawater reservoirs were contained in 10-L high-density polyethylene carboys, which were replaced every ∼48 h. Owing to the large volume of the reservoirs, 1-13C- and 13C6-labeled glucose were the most cost-effective and so were selected over the other 13C glucose isotopologues. O2 levels were monitored at the FTR outlet using a flow-through O2-sensitive probe (PyroScience FireSting). Glass syringes were used to collect seawater from the outlets and reservoirs for DIC, 13CO2, and VFA analysis every few hours. Both seawater reservoirs were bubbled with ambient air at the first sampling point (0 h), before transitioning to anoxia by purging with 800 ppm CO2 in pure N2 using a digital gas mixer (Vögtlin).

Isotopic and biogeochemical measurements

Samples for measurement of DIC concentration (3 ml) and 13CO2 concentration (12 ml) were collected in gastight glass vials (Labco Exetainer), preserved with 10 μl of 6% HgCl2, and stored with no headspace. Prior to 13CO2 analysis, 4 ml of sample in the 12-ml vial was replaced with helium. Phosphoric acid (12.5 mM) was added to the sample to convert DIC to CO2 before analysis on a Hydra 20-22 Continuous Flow Isotope Ratio Mass Spectrometer (CF-IRMS; Sercon Ltd., UK). Total DIC concentration was measured using a DIC analyser (Apollo SciTech). Nitrate and sulfate concentrations were not measured; however, we know from previous measurements that background nitrate is likely to be <10 μM [10], and sulfate will be on the order of seawater concentrations, at 28 mM. Sediment grains were sized using a Beckman Coulter LS13 320 particle sizer after soaking in sodium hexametaphosphate for 24 h and sonicating for 10 min. FTR samples for VFA analysis (2 ml) were collected in 4-ml ashed borosilicate glass vials with Teflon-lined lids (Sigma-Aldrich) and frozen. Upon analysis, samples were thawed and derivatized as previously described (Albert and Martens, 1997), before injection into a reverse-phase High Performance Liquid Chromatography (HPLC) system combined with a preconcentrator, guard column (Agilent SB-C8, 4.6 × 12.5 mm) and analytical column (Agilent SB-C8, 4.6 × 250 mm).
Isolation, cultivation, and sequencing of bacteria

We designed a strategy to culture facultative anaerobes representative of the permeable marine sediment communities of Port Phillip Bay, Victoria. Sediment and seawater from Melbourne Beach were collected in November 2020 and combined in 180-ml vials to form a slurry. The slurry was supplemented with 1 mM glucose and sealed with a butyl rubber stopper before being made anoxic by purging with N2. The vial was then placed on an orbital shaker (150 rpm, dark) at room temperature (20 °C). After a 14-day incubation, plates of Marine Agar 2216 medium (Difco) supplemented with 1 mM glucose were prepared in Petri dishes, inoculated with a portion of the slurry, and incubated aerobically at 30 °C for 3 days. Following this incubation, individual bacterial colonies were repeatedly transferred to fresh Marine Agar 2216 + 1 mM glucose plates for purification and identification. 16S rRNA genes from each bacterium were amplified using colony PCR and visualized via gel electrophoresis [15], before extraction (Isolate II PCR & Gel Kit, Bioline) and whole-genome sequencing. Cellular DNA was sent to the MHTP Medical Genomics Facility, Hudson Institute of Medical Research, for whole-genome sequencing. Library preparation was performed using the Illumina Nextera XT DNA library prep kit with unique dual indexing, followed by 150 bp paired-end sequencing on a NextSeq2000 platform (Illumina). Raw shotgun sequences were subjected to quality filtering using the BBDuk function of BBTools v38.80 (https://sourceforge.net/projects/bbmap/), which sequentially removed contaminating adapters (k-mer size of 23 and hamming distance of 1), PhiX sequences (k-mer size of 31 and hamming distance of 1), bases from 3′ ends with a Phred score below 20, and resultant reads shorter than 50 bp. Quality-filtered reads were assembled using Unicycler v0.4.7 (--mode normal, --keep 0) to obtain draft genomes [16]. The purity of the isolates was corroborated by the absence of foreign DNA and the presence of a sole 16S rRNA gene in the assembled genomes. The taxonomy of the newly isolated bacteria was assigned by GTDB-Tk v1.4.0 [17] with reference to the Genome Taxonomy Database (GTDB) R06-RS202 [18]. Annotation of genomic features and metabolic capabilities was performed using DRAM v1.2.4 [19], with the dbCAN2 database, MEROPS peptidase database, and KEGG protein database (accessed 22 November 2021). We additionally searched the genomes against our custom databases (doi: 10.26180/c.5230745) for the presence of key metabolic marker genes involved in the use of various electron acceptors and donors, using DIAMOND v2.0.11 [20], with cut-offs reported previously [21].
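For readers who want to retrace the assembly and classification steps, the sketch below chains the named tools through Python's subprocess module. The file names are placeholders, only a subset of the reported BBDuk steps (adapter and quality trimming) is shown, and databases and versions would need to match those cited above.

import subprocess

R1, R2 = "isolate_R1.fastq.gz", "isolate_R2.fastq.gz"   # placeholder inputs

def run(cmd):
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Quality filtering with BBDuk: adapter trimming (k=23, hdist=1),
# quality trimming to Q20, and a 50 bp minimum read length, as in the text
run(["bbduk.sh", f"in1={R1}", f"in2={R2}",
     "out1=clean_R1.fq.gz", "out2=clean_R2.fq.gz",
     "ref=adapters.fa", "ktrim=r", "k=23", "hdist=1",
     "qtrim=rl", "trimq=20", "minlen=50"])

# Draft genome assembly with Unicycler (--mode normal, --keep 0)
run(["unicycler", "-1", "clean_R1.fq.gz", "-2", "clean_R2.fq.gz",
     "-o", "assembly", "--mode", "normal", "--keep", "0"])

# Taxonomic assignment against GTDB with GTDB-Tk
run(["gtdbtk", "classify_wf", "--genome_dir", "assembly",
     "--out_dir", "gtdbtk_out", "--extension", "fasta"])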
Pure culture 13C isotopologue glucose assay and volatile fatty acid measurements

Marine Broth 2216 medium (Difco) was prepared in 180-ml vials, sealed with butyl rubber stoppers, and the headspace kept under anoxic (purged with N2) or oxic (purged with air) conditions. Each vial was then inoculated with a bacterial strain to an OD600 of 0.02 before addition of 50 μM 13C-labeled glucose isotopologues (1-13C, 2-13C, 3-13C, or 13C6), as outlined previously. Simultaneously, three initial incubations (t0) that were not treated with glucose were opened and subsampled to determine background concentrations of 13CO2. Vials with added 13C-labeled glucose were then placed on an orbital shaker (150 rpm, 30 °C) for 4 h before final sampling of 13CO2. Samples for measurement of 13CO2 concentration were collected and analyzed as described in the Isotopic and biogeochemical measurements section.

Bacterial incubations for volatile fatty acid analysis

Incubations were set up to monitor the growth (OD600) of both bacteria under oxic and anoxic conditions, as well as VFA production. About 50 ml of Marine Broth 2216 medium (Difco) supplemented with 1 mM glucose was prepared in 180-ml vials; the bacteria were inoculated to an OD600 of 0.05 and the vials sealed with butyl rubber stoppers. Incubations were made anoxic by purging with N2, or constantly flushed with ambient air to ensure the headspace remained oxic. Vials were then placed on an orbital shaker (150 rpm) at 30 °C for a week. About 1 ml of sample was removed each day and the optical density at 600 nm (OD600) measured (1-cm cuvette; Eppendorf BioSpectrometer basic). Based on the anoxic growth curves of each bacterium, these cultures were incubated for 2 and 4 days, corresponding to early and mid-stationary phase. At these time points, the culture was transferred to centrifuge tubes and centrifuged at 4500 × g. Samples of the supernatant (3 ml) were filtered (0.2 μm) and frozen in 4-ml ashed borosilicate glass vials with Teflon-lined lids (Sigma-Aldrich) for VFA analysis. Media controls without bacteria were also run. Analysis of VFA samples proceeded as described in the Isotopic and biogeochemical measurements section.

Calculations

Slurry incubations

Depending on the 13C-labeled glucose isotopologue added (1-13C, 2-13C, 3-13C, 13C6) and the metabolism taking place, carbon at different positions will be converted to 13CO2, and a 13CO2 ratio (Rn) can be calculated (Fig. 1).

The excess concentration of 13CO2 produced in incubations amended with glucose labeled at the 1C, 2C, and 3C positions was derived from the difference in r between the beginning (t0) and the end of the 4-h incubations, together with the DIC concentration, such that

13CO2(n) = (r_t4h − r_t0) × [DIC]    (1)

where r is the ratio of masses 45/44 and n = 1, 2, 3 (position of the labeled carbon atom). The same is calculated for the excess concentration of 13CO2 produced in incubations amended with glucose labeled on all six carbon atoms:

13CO2(C6) = (r_t4h − r_t0) × [DIC]    (2)

13CO2 ratios (Rn) for each labeled carbon position are then derived by normalizing against the 13C6 treatment (Fig. S3, Table S2), such that

Rn = 13CO2(n) / 13CO2(C6)    (3)

where n = 1, 2, 3, and, for complete respiration of glucose,

Rn = 1/6 ≈ 0.17    (4)

For example, respiration of 3-13C glucose results in a 13CO2 ratio of 0.17, as only one out of six carbons becomes 13CO2 (Fig. 1). When respiration is inactive, Rn will resemble that of a fermentation pathway (EMP, PP, ED).
For example, when bacteria undergo EMP fermentation using 3-13C glucose, one of the two CO2 molecules produced will be labeled, such that

R3(EMP) = 1/2    (5)

Alternatively, if bacteria undergo PP fermentation using 1-13C glucose, the only CO2 produced will be labeled, such that

R1(PP) = 1    (6)

Total carbon oxidation was then distributed into fractions as respiration-driven (f_resp), EMP fermentation-driven (f_EMP ferm), ED fermentation-driven (f_ED ferm), or PP fermentation-driven (f_PP ferm). A best fit for the contribution of the metabolisms (respiration, EMP fermentation, ED fermentation, PP fermentation) to the total rate of CO2 production was estimated from all Rn values (Fig. 2 and Table S3). This fit was performed by minimizing the total sum of squares of the error between the observed and theoretical Rn values,

Rn(observed) ≈ f_resp · Rn(resp) + f_EMP ferm · Rn(EMP) + f_ED ferm · Rn(ED) + f_PP ferm · Rn(PP)    (7)
f_resp + f_EMP ferm + f_ED ferm + f_PP ferm = 1, with all fractions ≥ 0    (8)

using the R package Rsolnp [22], subject to the constraints that all contributions are positive and add up to 1 [23]; a minimal numerical sketch of this constrained fit is given below. Carbon oxidation in each incubation is often distributed across more than one fraction; in these instances, we defined the dominant metabolic pathway as the one responsible for the largest fraction of 13CO2 production. It should be noted that some fermentations do not produce CO2 [24], and this approach therefore will not quantify those pathways. However, we observed accumulation of VFAs less than or equal to that expected from the calculated fractions of fermentation (see Results), so it is unlikely that such pathways are occurring at significant rates.

Bacterial cultures

13CO2 ratios for the pure-culture glucose assays were calculated similarly to the above (Fig. S4, Table S2); however, DIC was not accounted for. Instead, the excess concentration of 13CO2 produced in incubations amended with glucose isotopologues (1-13C, 2-13C, 3-13C, 13C6) was derived using only the difference in r between the beginning (t0) and the end of the 4-h incubations, such that

13CO2(n) = r_t4h − r_t0    (9)

where r is the ratio of masses 45/44 and n = 1, 2, 3 (position of the labeled carbon atom). The same is calculated for the excess concentration of 13CO2 produced in incubations amended with glucose labeled on all six carbon atoms:

13CO2(C6) = r_t4h − r_t0    (10)

13CO2 ratios (Rn) for each labeled carbon position, and a best fit for the fractional contribution of the four metabolisms, are then derived as above (Equations (3)-(8)). The standard deviation of each metabolism is determined by propagating the standard deviations of the raw ratios through the calculation.
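The constrained least-squares fit was performed with the R package Rsolnp; an equivalent minimal sketch in Python (our construction, with illustrative Rn values rather than the study's data) is shown below, using scipy's SLSQP solver with the same constraints: non-negative pathway fractions summing to 1, and assuming, as implied above, that the observed ratios mix linearly in the fractions. The theoretical Rn rows follow Fig. 1 for respiration (1/6 at every position), EMP (R3 = 1/2) and PP (R1 = 1); the ED row (R1 = 1/2) is our reading of the standard carbon-position assignments.

import numpy as np
from scipy.optimize import minimize

# Theoretical 13CO2 ratios R_n (rows: pathways; columns: n = 1, 2, 3)
R_theory = np.array([
    [1/6, 1/6, 1/6],   # respiration: all positions released equally
    [0.0, 0.0, 0.5],   # EMP fermentation: CO2 from C3/C4
    [0.5, 0.0, 0.0],   # ED fermentation: CO2 from C1/C4 (our assumption)
    [1.0, 0.0, 0.0],   # PP fermentation: CO2 from C1 only
])

R_obs = np.array([0.05, 0.03, 0.45])   # illustrative observed R_1, R_2, R_3

def sse(f):
    """Sum of squared errors between observed and modelled ratios."""
    return np.sum((R_obs - f @ R_theory) ** 2)

cons = ({"type": "eq", "fun": lambda f: f.sum() - 1.0},)   # fractions sum to 1
bounds = [(0.0, 1.0)] * 4                                   # fractions >= 0
res = minimize(sse, np.full(4, 0.25), method="SLSQP",
               bounds=bounds, constraints=cons)

for name, frac in zip(["resp", "EMP ferm", "ED ferm", "PP ferm"], res.x):
    print(f"f_{name:<9s} = {frac:.2f}")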
Flow-through reactor experiment

The excess concentration of 13CO2 produced in the reactor effluent amended with glucose labeled at the first and on all six carbon atoms is likewise calculated using Equations (1) and (2), except that the difference in r is determined between the reservoir and the sediment-packed reactors. As only 1-13C- and 13C6-labeled glucose were used, R1 is derived using Equations (3) and (4) (Tables S2 and S5). Given that we only used two isotopologues for this experiment, we assumed that respiration and EMP fermentation dominated (consistent with the four-isotopologue experiment), and total carbon oxidation was distributed as respiration-driven (f_resp) or fermentation-driven (f_EMP ferm) based on the prominence of the R1 values:

f_resp + f_EMP ferm = 1    (11)
R1 = (1/6) · f_resp + 0 · f_EMP ferm    (12)

where R1 is determined following Equation (3), and the coefficients 1/6 and 0 follow from the theoretical Rn values (see Fig. 1). Solving Equations (11) and (12) gives Equation (13), which predicts the proportion of carbon oxidation driven by EMP fermentation at each time point:

f_EMP ferm = 1 − 6 · R1    (13)

Following Equation (11), the remaining carbon oxidation is then assumed to be the fraction driven by respiration (f_resp). Again, EMP fermentation or respiration was determined to be the dominant metabolic pathway when its fraction was >0.5.

Fermentation dominates anoxic glucose degradation in permeable sediments

We distinguished glucose oxidation in each sediment as respiration-driven (f_resp), EMP fermentation-driven (f_EMP ferm), ED fermentation-driven (f_ED ferm), or PP fermentation-driven (f_PP ferm) through the pathways summarized in Fig. 1 (see Calculations). Across all sediments, glucose metabolism was typically dominated by either respiration or EMP fermentation, though ED fermentation did contribute in some incubations (Figs 2 and S3, Tables S2 and S3).

In line with the classical redox cascade, respiration was dominant in the low-energy, unperturbed muddy sediments (Yarra, Gippsland) across both oxic and anoxic incubations up until 7 days. In contrast, fermentation was the dominant glucose mineralization pathway in most permeable sediments. Microbial communities in surface sands from multiple sites all undertook EMP fermentation under anoxia. EMP fermentation typically dominated (f_EMP ferm > 0.5) within the first hours of anoxia (Anoxic 0) for nearly all surface sediments and persisted for up to 24 h (Anoxic 1). Respiration typically increased following 3 and 7 days of anoxic incubation. While some of the deep sediments show fractions of EMP fermentation under initial anoxia, they are all more respiration-driven than the surface sands of the same location. The observations that respiration is higher in deeper sands or following long-term incubations are consistent with more stable redox conditions favoring the activity of sulfate-reducing bacteria. Surprisingly, substantial fractions of both EMP and ED fermentation were also observed in oxic incubations, particularly in high-energy surface sands including Werribee and Melbourne Beach.

Comparing the total fraction of fermentation (f_EMP ferm + f_ED ferm + f_PP ferm) with respiration (f_resp) shows that surface and deep sands have similar average fractions under initial anoxia, though large differences are seen in muds (Fig. 3A). The total fraction of fermentation on the first day of anoxia (Anoxic 0-1) for each sediment was then compared against median grain size as a measure of hydrodynamic energy (Fig. 3B). High-energy sites with larger grain sizes, such as Werribee Beach, had the highest fractions of fermentation, while fine-grained muddy sediments such as Yarra Estuary had fermentation fractions close to 0.
Altogether, this produced a strong positive correlation (R 2 = 0.60, P = .001)when the notable outliers of highly bioturbated muds (Patterson) and large grain size carbonate sands (Heron) are excluded (Fig. 3B).Average total fractions of fermentation of each sediment were also plotted against DIC production, but no correlation was observed (data not shown). A shift from fermentation to respiration during long-term anoxic incubations We investigated the effect of extended anoxic conditions on the transition between fermentation and respiration using FTR experiments where we measured fermentation, respiration, and VFA production (Fig. 4).This experiment showed the dominance of respiration during the initial oxic conditions from 0 to 18 h (Fig. 4A and B, Tables S4 and S5), then transitioned to a dominance of EMP fermentation at 46 h (after 28 h anoxia).This was followed by a return to a dominance of respiration at 100 h where it remained until the experiment finished at 160 h.We measured the concentrations of various VFAs to determine whether fermentative end products were excreted (Fig. 4C, Table S7).At the onset of anoxia, acetate and lactate production dominated (10 μM mean concentration for both), after which acetate production dominated reaching a maximum concentration of 40 μM at 160 h.Formate was consistently consumed by the FTRs.In order to account for fermentation products, we undertook a mass balance of CO 2 and acetate produced in the reservoir and FTR at 46 h (anoxic fermenting) and 160 h (anoxic respiring, Table 1).At 160 h, we were able to account for 102% of the glucose respired as acetate and CO 2 ; however, at 46 h, we were only able to account for 66% of the glucose in the form of CO 2 and acetate.To compare the observed accumulation of acetate with that expected from fermentation, we applied the fraction of fermentation from the 13 C glucose assay to the measured amount of 13 CO 2 produced and calculated an expected acetate accumulation equivalent.At 160 h, we were able to account for 110% of the expected acetate accumulation, but we were only able to account for 12% of the expected acetate at 46 h indicating an unknown sink for fermentation products. Bacterial cultures validate occurrence of oxic and anoxic fermentation To further compare respiration, fermentation, and VFA production dynamics in bacterial cultures under oxic and anoxic conditions, we isolated species Lutibacter sp., Vibrio sp., Tropicimonas sp., and Maribacter sp. from marine sediments and quantified the fractions of fermentation and respiration under short-term oxic and anoxic conditions (Figs 5A and S4, Tables S2 and S6).All species showed a dominance of EMP fermentation (f EMP ferm > 0.5) under anoxic conditions, though we also observed a distinct fraction of PP fermentation in Vibrio (f PP ferm = 0.39).Under oxic conditions, metabolism was surprisingly also dominated by fermentation through the EMP pathway for most species (f EMP ferm > 0.5), with the exception of Tropicimonas for which respiration comprised half of CO 2 production (f resp = 0.50) (Fig. 5A).Tropicimonas and Maribacter, both capable of respiration under anoxic conditions, possess genes for nitrous oxide reduction (Fig. S5).Additionally, Tropicimonas, which displayed the greatest fraction of anoxic respiration, has genes for nitrate and nitric oxide reduction (Fig. 
It should be noted that the different pathways are integrated over the 4-h experiment and that they likely occur sequentially as the glucose is consumed and the thermodynamics change. In extended incubations to measure VFA accumulation, both Lutibacter and Maribacter reached stationary phase under oxic conditions within 1 day of incubation (Fig. 5B and C). Lutibacter was able to grow under anoxic conditions, albeit at a much slower rate and to a lower yield than under oxic conditions, reaching stationary phase after ∼3 days. Maribacter showed no growth under anoxic conditions. Under oxic conditions, both bacteria produced VFAs including acetate, formate, propionate, isobutyrate, butyrate, succinate, isovalerate, and valerate (Fig. 5D and E), with concentrations highest after 4 days under oxic conditions in the Maribacter culture. Under anoxic conditions, the highest concentrations of VFAs were observed after 4 days of anoxia and included acetate, propionate, butyrate, and succinate.

Discussion

Most carbohydrates within the sediment exist in polymeric forms, which are hydrolyzed to monomeric sugars such as glucose. Our addition of 50 μM glucose was most likely higher than natural concentrations of ∼1 μM in permeable sediments [25]. Of critical importance here is the concentration of glucose relative to the saturating concentration for glucose uptake (fermentation), which can range from 0.2 to 23 μM for particle-associated bacteria [26]. At the lower end of this range, our glucose addition is unlikely to have increased the rate of glucose uptake, while at the higher end, it could have greatly stimulated glucose uptake (fermentation), leading to an accumulation of fermentation products before respiring bacteria were able to assimilate them [27]. It is therefore possible that the "fermentation" proxy we are measuring with the isotopologues represents a temporal decoupling between fermenting and respiring bacteria. If this were occurring, we would expect to see a substantial release of VFAs after glucose addition, which was not observed (Table 1, Fig. 4C; see also the discussion below). It should therefore be recognized that fermentation (or some fraction of it) may have been induced by the glucose tracer addition.

Fermentation of glucose dominated in most permeable sediments, with EMP fermentation being the predominant carbon oxidation pathway (f EMP ferm > 0.5, Figs 2 and 4B). Within permeable sediments, it is likely that redox conditions are constantly fluctuating due to the periodic incorporation of oxygen and other electron acceptors into the sands by sediment resuspension and advective pore water flow [5, 28]. The microbial community in these high-energy sands is known to be dominated by metabolically flexible generalist bacteria of the families Flavobacteriaceae and Woeseiaceae that ferment under anoxic conditions [7-9, 29], and the culture experiments confirmed this, with all species showing a dominance of fermentation under anoxic conditions (Fig. 5A). Metabolism in these high-energy sandy sediments trended toward respiration between 1 and 7 days of anoxia (Fig. 2). In agreement, FTR incubations show the evolution of fermentation in Melbourne Beach sediments, with fermentation dominating upon initial anoxia before switching back to respiration after ∼2 days of anoxia (Fig. 4A and B).
This is consistent with previous observations that after 4-10 days of anoxia there is an enrichment of sulfate reducers of the families Desulfobacteraceae and Desulfobulbaceae, which leads to a recoupling between fermentative and anaerobic respiratory bacteria [7-9, 30].

Comparing the fractions of fermentation within the first day of anoxia (Anoxic 0-1) against median grain size for each site showed a positive correlation (P = 0.001), with two notable outliers (Fig. 3B). Hydrodynamic energy at each site controls both grain size and the frequency of sediment re-oxygenation [28, 31]. Coarse-grained sediments will be more oxygenated and likely dominated by facultative aerobes, while finer-grained sediments will be more anoxic and dominated by coupled anaerobic fermentative and respiratory bacteria. This relationship is therefore consistent with our hypothesis that fermentation uncoupled from respiration will dominate in more dynamic, higher-energy settings.

The two noted outliers that do not conform to this relationship are a highly bioturbated mud and coarse-grained carbonate sediments. In the case of the bioturbated mud, there was much more fermentation than would be expected based on grain size alone, and fermentation dominated immediately after the onset of anoxia (Fig. 2). The exact mechanism driving the fermentation is unclear, but we suggest it could be linked to the activity of benthic fauna that can oxygenate sediments on a time scale comparable to that of advective transport [32-35]. Similar to sandy sediments, pore waters fluctuate between oxic and anoxic conditions, which may select for metabolically versatile bacteria that can survive under both [7-9]. Previous studies of sediments inhabited by infauna have shown that microbial communities indeed differ between bioturbated and non-bioturbated zones [36, 37], with dynamic, re-worked bioturbated sediments featuring generalist fermenters such as Flavobacteriaceae [37]. The occurrence of fermentation at the Patterson River suggests that our hypothesis of flexible generalist communities adapting to dynamic conditions by fermenting extends beyond sandy sediments to other temporally oxic/anoxic sediments. In the case of the coarse-grained carbonate sediments, there was much more respiration than would be expected from grain size. Carbonate sediment grains are known to be porous and highly biologically active and hence harbor permanently anoxic zones within the grains, which can enhance anoxic processes such as denitrification [38]. As such, it is highly likely that even though this sediment type is highly oxygenated around the grains, there is a community of coupled fermentative and respiring bacteria within the grains that fully respires the added glucose tracer.

As noted previously, the 50 μM glucose addition may have stimulated fermentation and hence led to a temporal decoupling of respiration and fermentation. If that is the case here, an alternative explanation for our observations is that cohesive and less disturbed sediments have a higher capacity to assimilate fermentation products or a lower saturation concentration for glucose fermentation. Further studies with direct glucose measurements and lower glucose tracer additions are required to definitively test this hypothesis.

The observation that significant rates of fermentation occurred in the high-energy sediments under oxic conditions (Fig. 2) was unexpected.
These observations were consistent with the results from the pure-culture experiments, which showed that facultatively aerobic bacteria isolated from Melbourne Beach, including Lutibacter, Vibrio, and Maribacter, undertook fermentation (f EMP ferm > 0.5) under oxic conditions (Fig. 5A). Vibrio and Tropicimonas undertook significant fractions of ED fermentation as well (f ED ferm = 0.15 and 0.31, respectively). The production of VFAs under oxic conditions in both pure-culture experiments is also consistent with oxic fermentation. Fermentation under oxic conditions at high glucose concentrations (>10 mM range) has been well documented in the literature and is known as the "Crabtree" effect or overflow metabolism in bacteria [39-41]. Although fermentation yields less energy per mol of glucose than oxidative phosphorylation (complete oxidation to CO2), it can proceed much more rapidly and therefore yield more energy per unit time [42], as well as maximize growth relative to proteome formation [43]. Within the context of cancer cells, this metabolism is known as the Warburg effect, and it has been argued that it rebalances the Krebs cycle in proliferating cells to provide anabolic (biomolecules) as opposed to catabolic (CO2, ATP) products required for cell growth [44]. Fermentation under oxic conditions may therefore convey an ecological advantage in dynamic environments such as permeable sediments, supporting rapid growth when oxygen and organic matter are often only transiently available [45]. The fact that we were able to observe fermentation under oxic conditions in both cultures and natural sediments exposed to lower glucose concentrations (50 μM for the isotope assays) suggests that oxic fermentation may be a common phenomenon in bacteria even at low glucose concentrations and warrants further investigation. In addition, the increased predominance of the ED fermentation pathway under oxic conditions at some sites and in some cultures (e.g. Maribacter and Tropicimonas) is consistent with the paradigm that it can be a major catabolic pathway for glucose under oxic conditions [46].
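The rate-versus-yield argument can be illustrated with simple arithmetic. The ATP yields below are textbook approximations (about 2 ATP per glucose for EMP fermentation versus roughly 30 for complete aerobic oxidation), and the uptake rates are hypothetical; the sketch only shows how a faster pathway can deliver more ATP per unit time despite a far lower yield per molecule.

```python
# Illustrative rate-versus-yield arithmetic for overflow metabolism.
# ATP yields are textbook approximations; the glucose uptake rates are
# invented solely to illustrate the trade-off discussed above.

ATP_PER_GLUCOSE = {"fermentation": 2.0, "respiration": 30.0}

def atp_flux(pathway: str, glucose_uptake_mM_per_h: float) -> float:
    """ATP production rate (mM ATP per hour) = uptake rate x ATP yield."""
    return glucose_uptake_mM_per_h * ATP_PER_GLUCOSE[pathway]

# If fermentation can run, say, 20x faster than respiration-limited uptake,
# it delivers more ATP per unit time despite the ~15x lower yield per glucose.
print(atp_flux("respiration", glucose_uptake_mM_per_h=0.1))   # -> 3.0
print(atp_flux("fermentation", glucose_uptake_mM_per_h=2.0))  # -> 4.0
```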
Our results also support previous observations that there is very little acetate production relative to CO2 in anoxic permeable sediments [10]. A mass balance for CO2 and acetate produced in the FTR at 46 and 160 h provides further insight into the fate of glucose under anoxic fermenting (46 h) and respiring (160 h) conditions. At 160 h, we were able to account for 102% of the consumed glucose as acetate and CO2 (Table 1). However, at 46 h, we were only able to account for 66% of the glucose in the form of CO2 and acetate. Furthermore, if we apply the fraction of fermentation calculated using the 13C glucose assay to the measured amount of 13CO2 produced, we can also calculate an expected acetate accumulation equivalent (Table 1). At 160 h, the calculated acetate accumulation was 74 μM C equivalents, compared to 82 μM C equivalents measured (110% of that expected). At 46 h, the calculated acetate accumulation was 145 μM C equivalents, compared to a measured value of 18 μM C equivalents (12% of that expected). This suggests that the fermentation uncoupled from respiration observed here is not a result of a transient accumulation of VFAs stimulated by glucose addition [27, 47]. It is possible that the missing acetate was assimilated by polyhydroxyalkanoate-accumulating organisms, which have been observed in permeable sediments [48], or stored as lipids [10, 30]. Consistent with this, the culture experiments showed that Lutibacter and Maribacter produced negligible VFAs after 2 days of anoxia. Furthermore, Lutibacter was able to increase its biomass under anoxic conditions, which suggests that it may be able to assimilate some of the fermentation products into storage molecules, which have recently been suggested to comprise an under-appreciated form of biomass [49]. The exact nature of this process remains to be determined and requires further investigation.

Conclusion

Using isotopologues of 13C-labeled glucose, we show through slurry and FTR incubations that EMP fermentation occurs in transiently anoxic sandy sediments, remaining uncoupled from respiration processes for up to 160 h. Factors controlling the frequency of sediment re-oxygenation, such as hydrodynamic energy and bioturbation, exert a strong control on the extent to which glucose fermentation decouples from respiration at the onset of anoxia. Oxic fermentation was observed in both sands and bacterial cultures, suggesting that this process may be significant in the environment. Our understanding of the extent to which this occurs with naturally present carbohydrates, and of the fate of the organic fermentation products, remains incomplete; the question remains as to what the final products of fermentation are.

Figure 2. Fractions of respiration, ED, EMP, and PP fermentation occurring at each site during each incubation, as determined by the R package Rsolnp using 13CO2 ratios (see "Calculations"). Incubations include oxic (0), anoxic (0), anoxic (1), and anoxic (7); numbers in parentheses represent the number of days of anoxia pretreatment before the glucose assay (4-h duration) was undertaken. Sands and muds are arranged from high-energy to low-energy sites and are also grouped into surface and deep.
Figure 4. FTR experiments were run under oxic conditions for 18 h and then transitioned to anoxia (dotted line, A), remaining anoxic for the rest of the experiment (grey shading). Fractions of EMP fermentation (f EMP ferm) and respiration (f resp) were estimated from R1 values (B, see "Calculations"); respiration is dark grey and EMP fermentation is light grey. (C) Net concentration changes (reservoir values subtracted from FTR outlets) of acetate, formate, propionate, and lactate; acetate and formate concentrations apply to the left y-axis, while propionate and lactate values are on the right y-axis. Error bars show the standard deviation.

Figure 5. (A) Fractions of respiration, ED, EMP, and PP fermentation occurring in oxic and anoxic incubations of the bacterial species Lutibacter sp., Vibrio sp., Tropicimonas sp., and Maribacter sp., as determined by the R package Rsolnp using 13CO2 ratios (see "Calculations"). (B, C) Growth curves of Lutibacter and Maribacter over a 1-week incubation under both oxic and anoxic conditions. (D, E) VFA concentrations in the cultures of Lutibacter and Maribacter after 2 and 4 days under oxic and anoxic conditions. Error bars show the standard deviation.
9,312.8
2024-01-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Is HPS a valuable component of a STEM education? An empirical study of student interest in HPS courses within an undergraduate science curriculum

This paper presents the results of a survey of students majoring in STEM fields whose education contained a significant history, philosophy and sociology (HPS) of science component. The survey was administered to students in a North American public 4-year university just prior to completing their HPS sequence. The survey assessed students' attitudes towards HPS to gauge how those attitudes changed over the course of their college careers, and to identify the benefits of and obstacles to studying HPS as a component of their STEM education. The survey reveals that students generally found unexpected value in taking HPS within their STEM curriculum. It also reveals that framing HPS courses as a means of gaining the communication skills necessary to be an influential scientist seems to resonate with students. However, students also identified several factors limiting engagement with HPS content, including the length and density of required readings and assessment via essays and papers.

In recent years there have been growing calls to include the philosophy of science - along with disciplines like history and sociology - when educating scientists, particularly at the university level. In a widely shared piece on Aeon, Subrena Smith argues that philosophy of science can play an important role in university-level science education and should not be made subservient to the sciences (Smith, 2017). Grüne-Yanoff (2014) articulates several benefits that philosophy of science can bring to science training to help create better scientists and suggests altered teaching approaches to realize those benefits more quickly. Even more recently, the prominent history of science journal ISIS included a focus section on pedagogy (see Rader, 2020). Though not primarily targeting a higher-ed audience, the journal Science Education has had a Science Studies and Science Education section since 2008 (see Duschl et al., 2008 and articles therein and since). Within these articles, it is not uncommon to find analyses of obstacles that arise when attempting to engage STEM students in courses that reflect upon science. Within the ISIS focus section, Vivien Hamilton, a historian of science, and Daniel M. Stoebel, a biologist, reflect on concerns that what scientists see as helpful analyses of historical episodes for training students may be at odds with the nuanced and non-judgmental narratives historians of science are comfortable providing (Hamilton & Stoebel, 2020). Often, however, many obstacles are seen as arising from STEM students themselves: there is a perception that STEM students are resistant to non-STEM courses in their curriculum. Smith cites episodes where students question her authority in the classroom and remarks that "students are doubtful that philosophers have anything useful to say about science" (Smith, 2017). Similarly, Till Grüne-Yanoff (2014) paints a picture of science students as disinterested in, and irked by, mandatory non-science breadth requirements, including philosophy of science. He claims university science students not only lack knowledge about how to reflect upon science, but also have little grasp of science itself. STEM student resistance to such courses may have social and educational consequences.
For example, there is recent evidence that medical students trained in the humanities or interpretive social sciences display greater levels of empathy compared to those receiving a positivist STEM education, empathy which in turn makes them more effective with their patients (Olsen & Gebremariam, 2020). Additional study is required, but it is reasonable to suggest that resistance to this training might diminish empathy, if it is indeed learned. Furthermore, student engagement is widely accepted as an important influence on student learning, especially among the lowest-ability students (Carini et al., 2006; Kahu, 2013; Trowler, 2010). Yet, outside of personal anecdotes, there is little data available to suggest that STEM students are or are not resistant to classes that reflect upon science. And even if they are resistant, is that resistance overcome with experience? Do students end up finding value in courses that encourage them to reflect upon science? The way these courses should be taught to undergraduates, and how effective they might be, depends on the answers to these questions.

This contribution attempts to answer the above questions empirically. It presents the results of a survey of fifty-two students majoring in STEM fields whose education contained a significant history, philosophy and sociology (HPS) of science component. The survey was administered to students in a North American public 4-year university just prior to completing their HPS sequence. The survey asked students about their attitudes towards HPS, about how those attitudes changed over the course of their college careers, and about the benefits of and obstacles to studying HPS as a component of their STEM education. The survey reveals that students generally found HPS to be a valuable addition to their STEM curriculum. It also reveals that framing HPS courses as a means of gaining the communication skills necessary to be an influential scientist seems to resonate with students. However, students also identified several factors limiting engagement with HPS content, including the length and density of required readings and assessment via essays and papers. This research suggests ways to positively alter student experiences of HPS to more quickly overcome student resistance and enhance learning. Overall, the value that students find in HPS courses helps to justify the inclusion of HPS as part of a STEM curriculum.

Survey background: HPS within a STEM curriculum

To assess non-major student interest in, and student-perceived value of, HPS courses, the HPS Experience Survey was created to identify STEM students' beliefs, motivations, and aspirations regarding HPS courses. The survey was administered in Fall 2020 at Michigan State University, a four-year public university located in the Midwest region of the United States. Students taking the survey were enrolled in Lyman Briggs College (henceforth "LBC"), one of several undergraduate "residential colleges" within the larger university and the only one that admits solely STEM students. As such, students must major in a STEM field (or in HPS) to graduate from LBC. The unique setup of LBC and the way it incorporates HPS into a STEM curriculum are factors that deeply influence student experiences of HPS and possible interpretations of the survey. Admission into LBC is optional and requires no prerequisites beyond acceptance into the university and a desire to major in a STEM field (which includes HPS). LBC admits about 600 students each year from the university.
This means that students self-select into, and can self-select out of, LBC. The perception on campus is that LBC is an honors college for would-be medical doctors. In actuality, not all, or sometimes even a majority, of LBC's students are members of the university's Honors College. However, students do often major in the biomedical sciences. For example, in 2018 there were 33 different majors declared by students in LBC. Of those majors, Human Biology (282), Neuroscience (202), and Physiology (102) were the most popular, and the only other majors with more than 50 students were Genomics & Molecular Genetics and Biochemistry.

Beyond its small size, admission into LBC uniquely affords students access to certain introductory STEM courses and the HPS sequence. LBC's students are given exclusive access to smaller-enrollment introductory science courses taught by college faculty (in Math, Chemistry, Biology, and Physics). These courses often emphasize experiential learning more than their counterparts offered outside LBC and are perceived by students to be more difficult. Since only introductory-level science courses are taught in LBC, students complete their more advanced STEM courses in the relevant department within the broader university, but they return for their HPS courses. Students are also afforded the ability to fulfill some of their university distribution requirements through LBC's HPS sequence.

The HPS sequence typically consists of four courses that replace the university-mandated introductory humanities and social science courses while also fulfilling the university's writing requirement. The first course is an introduction to history, philosophy, and sociology of science, taken in a student's first year (unless exempted due to advanced placement credits). Enrollment in these courses is typically capped at 24. The second and third courses are taken in the third year of study, after students complete their science courses in LBC. These third-year courses cover a variety of different themes, for example "Science and the Public", "Science and the Environment" or "Science of Sex and Gender." Enrollment in these courses is typically capped at 30. The final course in the sequence is a capstone course, designed to be taken in a student's last year of study. These capstone courses of 15 students often have narrow topics but ask the students to reflect on and utilize the knowledge they have gained across their collegiate career. It is important to note that - at least after the first year - there is a wide variety of course offerings, and thus one should not expect students completing the sequence to be exposed to the same content. The courses do, however, have similar emphases and overall curricular goals. Broadly, the courses aim to help students become scientists who are not only competent in the lab or field but can also recognize the social ramifications of their work and communicate those ramifications to non-scientists. Thus, one of the central goals of the HPS sequence is to ensure that students acquire successful strategies for effective research, writing, and self-expression. Extensive writing instruction and practice are built into the HPS sequence. The forms of writing may vary but include traditional essays (with opportunities to edit drafts with peers and revise after consultation with an instructor), keeping a journal of personal experiences with scientific issues, or even writing podcast scripts and digitally recording them.
At the same time, these HPS courses challenge students to consider the rational and cultural forces that affect the practice of science. It is typical for faculty to develop experiential learning activities to achieve this aim. For example, students in the introductory course may examine the question "What is science?" and explore the demarcation problem. To do so, students engage in a "black box" activity (based on Hardcastle & Slater, 2014) in which they investigate the contents of a box that they cannot open. Students must create the rules for the investigation (e.g. what constitutes opening the box, what methods of investigation are permissible), determine whether they should be able to work together (and how), and discuss how their chosen rules for investigation would impact other scientific practices. The third-year "Science and the Public" course has utilized an activity (see Charenko & Louson, 2019) where students role-played different stakeholders responding to an environmental disaster reminiscent of the fallout of Chernobyl in Britain. One capstone course (pre-COVID) employed the board game Pandemic: Legacy to imagine a world facing a global pandemic. The class explored the ethics of triage, the role of institutions in protecting health, and public trust in vaccines, among other topics. 1 While the HPS sequence is often presented to students as enhancing science communication, it is not the only way these courses are described within LBC. LBC sometimes frames these courses as ethics courses - where students learn to be good scientists - or frames them as critical thinking courses.

Survey methodology: Assessing student engagement in HPS courses

The HPS Experience Survey was made available to students within LBC at the beginning of their capstone courses in the HPS sequence in Fall 2020. A link to the optional survey was sent to members of each course section at the discretion of their instructor. Courses in Fall 2020 were fully online due to COVID-19 restrictions, though the experiences that the students reflect upon were almost entirely traditional in-person interactions. There is no evidence that COVID-19 restrictions significantly altered results. The survey was anonymous and optional, and no personally identifiable information was gathered. Students were informed before consenting to the survey that results might be used to help students design projects in one section of the capstone course, be used in other forms of college analysis, or serve purposes and research beyond LBC. The results were kept by the survey creator and not shared among instructors. The university's Institutional Review Board Office judged that this research did not require IRB approval and that it complied with the relevant federal regulations beyond those under the purview of the IRB. The survey consisted of three sections: a demographic section, a section asking about overall student experience in LBC, and a section addressing experiences in HPS classes. The HPS section of the survey, which is the focus here, asked three types of questions: (1) multiple-choice questions on student educational background, (2) 5-point Likert-scale questions regarding the student's perception of LBC, its HPS courses, and the influence of such courses on the student, and (3) open-ended response questions regarding student experience in LBC and its HPS courses.
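As a minimal illustration of how the 5-point Likert items can be tallied into the percentages reported in the sections below, the following Python sketch uses invented responses (chosen only to echo the rough proportions reported later; no actual survey data are reproduced):

```python
# Sketch of tallying 5-point Likert responses into percentages.
# The responses below are hypothetical, not the survey's raw data.
from collections import Counter

LEVELS = ["strongly disagree", "disagree", "neither agree nor disagree",
          "agree", "strongly agree"]

def summarize(responses):
    """Return the percentage of responses at each Likert level."""
    counts = Counter(responses)
    n = len(responses)
    return {level: round(100 * counts[level] / n, 1) for level in LEVELS}

# Hypothetical item: "HPS courses were a valuable addition to my education"
responses = (["strongly agree"] * 26 + ["agree"] * 14 +
             ["neither agree nor disagree"] * 7 + ["disagree"] * 4)
print(summarize(responses))
# -> 51.0% strongly agree, 27.5% agree, 13.7% neither, 7.8% disagree
```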
Open-ended responses were coded following an inductive approach in order to analyze students' perceptions regarding: (1) what was valuable; (2) skills gained; and (3) obstacles presented in HPS courses. The HPS section of the survey is provided in Figure 1 and Table 1 in the appendix.

In Fall 2020, there were 165 students enrolled across 12 sections of the capstone course; 52 members of this group responded to the survey, with 36 students providing demographic information. The demographic information revealed that 88.6% (31) of respondents had completed three years of university education, 8.6% (3) had completed four years, and 2.9% (1) had completed one year (35 rather than 36 students responded to this question). The HPS section of the survey received 51 responses for each Likert-scale question. Of those responding, 49% (25) of students had taken three HPS courses at the time of the survey, 31.4% (16) had taken two HPS courses, 11.8% (6) four, 3.9% (2) one, and five courses and more than six courses were each selected by 2.0% (1) of respondents. Those indicating they took fewer than three courses were likely exempt from the introductory HPS course due to university credit earned in high school (usually through advanced placement English classes) or were taking the fourth course in the sequence early or at the same time as the capstone due to scheduling conflicts. While 98% (50) of those surveyed were not HPS majors, one individual was, and this is likely responsible for the single >6 response. Though the focus in this paper is on students who are STEM majors, this response was not removed from the sample because it is possible that this respondent is a double major in a STEM field in addition to HPS, and it is unlikely that their original intention in joining LBC was to major in HPS.

Surveyed students find the inclusion of HPS courses in a STEM curriculum to be a valuable addition to their education, but HPS courses were not a deciding factor in their choice of program

When asked whether, in retrospect, students viewed the inclusion of HPS courses in their curriculum as a valuable addition, a majority (51%) of those surveyed strongly agreed, with 78.5% agreeing or strongly agreeing. However, the inclusion of the HPS sequence did not typically motivate students to join LBC. The most common response when asked if the inclusion of HPS courses influenced student decisions to attend LBC was "not at all influential" (40%). In the non-HPS section of the survey, students repeatedly explained that their motivation to attend LBC came from access to a more challenging science curriculum with smaller class sizes. That arts courses are offered alongside science courses is mentioned only sparingly. For example, though one respondent claimed "A college that represented a mixed focus on both the humanities and science seemed like exactly the thing I'd enjoy" and another "I wanted a better understanding of how science came to be how it is today," there were far more offering some variant of "I like the concept of the smaller classroom sizes and I heard the teaching and academics are superior to the rest of the school." That HPS - one of the unique characteristics of LBC - is infrequently a factor motivating enrollment might indicate that students come to see the value in HPS over time. That is, students are initially neutral towards HPS courses, do not know what they are, or find HPS courses to be a burden or unnecessary to their education. Evidence for this interpretation is given in the open-ended responses.
Students repeatedly advise future students to be open-minded about HPS, which may reveal that students were initially not open-minded and were resistant to these courses. For example, one student writes "Go in with an open mind and find out stuff about your opinions and ideology you never knew was there," while another claims "HPS is not as big of a hassle as it seems." Some students mention HPS courses when asked how LBC exceeded their expectations. One student wrote, "I didn't come into LBC with expectations about the integration of science and the humanities (I thought it was a cool feature, but didn't care too much about it), but ended up loving HPS classes and the way they've broadened my worldview." While students may not have initially perceived HPS courses as meaningful additions to their educations, after having experiences with these courses, they grow to like them. In fact, a few students directly reference a changed perception of HPS; for example, "My feelings for LBC have not changed very much since Freshman year, besides developing a greater appreciation for HPS." There is little indication that resistance, if initially present, persists.

Surveyed students enjoyed taking HPS courses, and they indicate that they gained skills and interests that they otherwise would not have

Results of the survey clearly indicate that a majority of students surveyed enjoyed taking HPS courses (54.9% strongly agree, 25.5% agree). A majority of students also indicated that they had developed interests or skills that they otherwise would not have (47.1% strongly agree, 23.5% agree). The skills that students mention gaining vary. For example, communication skills (including "communication", "reading", "writing", and "listening") were frequently mentioned, especially when students were asked to share one specific skill they learned. As one person commented, they learned "Communication and how to speak up when I have an opinion", while other students wrote "The ability to quickly skim through a text while retaining the most valuable information" and "how to write research papers." Many answers combine various elements of the framings - communication, critical thinking, and ethics - for HPS employed by LBC. One student learned to "Critically understand the author's argument" while another learned "how to write an argumentative ethics paper." It is tempting to draw the conclusion that the major benefit of HPS courses to students is in fact enhanced communication skills. However, in this case one should be cautious: LBC frequently frames HPS as an opportunity to develop the communication skills needed to become a well-rounded scientist. An epistemically safer claim would be that framing the value of HPS in terms of enhanced communication can successfully resonate with STEM students. Students see other benefits as well. Across the open-answer questions, the opportunity for personal growth and the gaining of perspective are often mentioned as what other students should know about HPS, indicating that these benefits are highly valued. Several students mention that HPS provided "eye opening" experiences while others comment that they got to "hear a lot of different perspectives." This desire to widen one's perspective seems to be echoed in the suggested additions to the HPS curriculum. Frequently mentioned topics included systemic racism, social (in)justice, mental health, and gender equality.
These topics are somewhat unsurprising suggestions, as Black Lives Matter protests and the #MeToo movement were receiving significant attention around the time of the survey. Still, these suggestions may signal a desire for STEM students to examine the role of their chosen disciplines in present-day social issues. It seems that STEM students want to become "well-rounded" scientists.

Surveyed students generally felt that their HPS courses were easier than their non-HPS courses and of equal or lesser importance, and they put more effort into their non-HPS courses

Studies in higher education tend to link engagement with positive learning experiences (Carini et al., 2006; Kahu, 2013; Trowler, 2010). In that light, the responses comparing HPS to non-HPS courses are encouraging. The survey suggests that HPS and non-HPS courses may be on par or, at least, that STEM students are unsure of how to gauge their relative importance: students most frequently selected "neither agree nor disagree" (33%) when asked whether non-HPS courses are more important than HPS courses. More students agree or strongly agree (49.1% combined) than disagree that their non-HPS courses are more important, but that is expected given that they major in STEM. The same reasoning can be applied to students' effort in such courses: students indicate they are more likely to put more effort into their non-HPS courses (41.3% agree, 19.6% strongly agree). One would expect this, especially since a majority of students indicate that they find HPS courses slightly easier (49% agree that HPS courses are easier, 19.6% strongly agree). LBC draws distinctions between HPS and STEM courses, with the HPS faculty forming its own disciplinary group. It would be interesting to see if abolishing the distinction between HPS courses and science courses would result in changes of opinion (as Smith, 2017 seems to suggest). It also cannot be ruled out that the perceived easiness of HPS courses is linked to students' favorable perceptions of them, a correlation known to plague student evaluations of teaching. This influence seems somewhat unlikely, given that the students in the survey have had a mix of different HPS professors and still report the classes being valuable.

Obstacles to STEM student engagement in HPS courses include reading and writing assignments and a mismatch between course topics and student interests

One potential obstacle to engagement is assigning to STEM students the kinds of reading and writing assignments that are often found in humanities courses. Several of the students who found HPS classes hard or not valuable mentioned the high demands on their reading skills. 2 At least one student who indicated that their HPS courses had little value suggested that the assigned readings were "not needed." Other students warned future students about the "dense" readings, while another commented that some readings led to "no engagement at all because the assigned reading wasn't interesting/relevant for science majors. It turned more into an English class because the professor talked more about rhetoric and symbolism." Anecdotally, my experience suggests that STEM students in an HPS class often struggle to read texts that would be standard reading in a philosophy class of the same (or even lower) level. Clearly, one way to avoid this obstacle is to carefully choose readings. One might also rely on other kinds of media; many in LBC regularly assign podcasts or movies in lieu of a written text.
At least one student approved of this approach: "I really enjoyed the use of multi-media (podcasts, books, etc.) in my previous courses." STEM students might also be somewhat inclined to discount their writing abilities. At least one student connected this fear of writing to their performance in HPS: "I think that the HPS classes were hard for me because I really don't like participating in class and I am an awful writer. So I always struggled when having to take the class which made me hate the classes because I wouldn't do the best in them. I had a little anxiety when I had to go to the class." In my experience, this sentiment about writing (and participation) is often expressed by STEM students.

In addition to the reading and writing assignments, another obstacle is alignment between course topics and student interests. Since students may select three of the four classes in the sequence from many different course offerings, there is significant variability in student experience. This made drawing conclusions from open-ended questions about specific course topics difficult. However, one apparent theme is that a mismatch between student interests and the content of the course can be an obstacle to engagement. For example, one student who was not interested in becoming a doctor despised how "every course has a medicine section somewhere and frankly I could not care less about it at this point." Another student studying biology was "hugely disappointed that my SENIOR SEMINAR for my BIOLOGY degree is a physics/theater/feminism discussion with hours of work outside of class that doesn't relate to my degree…". Others commented that they were unable to disagree with the professor or be uninterested in a topic, with one saying they felt pressure to write what the professor wanted in their essay. While such responses are merely suggestive, if one were designing an HPS curriculum for STEM students, the topical relevance of the content is an issue that deserves considerable attention. If HPS courses were targeting an audience with a diversity of scientific interests, it might suggest that courses adopt a survey approach rather than concentrate on specific topics.

Survey limitations

There are certain limitations to this research that should be noted. The number of respondents is small, which limits the power with which conclusions can be drawn. The variety of courses on offer is large, and thus the HPS content students have encountered may vary substantially. This makes drawing inferences about which content is effective for overcoming resistance, or what content should be changed, very difficult. The dependability of the survey questions is also unknown, as limited access to the population of interest has prevented pilot tests and validation. The use of 5-point Likert-scale questions brings with it some ambiguity; for example, the middle "neither agree nor disagree" option could indicate that the respondent is neutral, does not know, or agrees with a strength of three out of five. The survey population is also somewhat atypical, in part because its members self-select into a college that has an HPS sequence and whose classes are perceived as more difficult than those standardly offered by the university. It should be noted that because students self-select into LBC, they can also leave LBC at any time and continue with the same major at the university.
It is possible the HPS Experience Survey systematically fails to capture the opinions of students with significantly unfavorable views of HPS courses, because they left LBC before the capstone course where the survey took place. While this is possible, there is not yet evidence to support it. Though it is merely conjecture, I have not heard of a student leaving because they found HPS to be a burden, and in my experience, the students who excelled in HPS but left LBC before the capstone often did so to pursue an interest they discovered in their HPS courses within an unsupported major. 3 Lastly, since the vast majority of the respondents to the HPS Experience Survey had completed two or more HPS courses (with the majority completing three or more), it cannot be established that a single class is sufficient to overcome resistance in reluctant students. That students may require time to realize the value of HPS could raise difficulties when integrating HPS into a STEM curriculum if the number of HPS courses is low or restricted to a single course.

Conclusion: Overcoming resistance and turning obstacles into advantages

There is a perception that STEM students are disinterested in HPS and believe the subject has little to offer them. The HPS Experience Survey assessed whether this perception is accurate and found it is not. After taking HPS courses, students retrospectively perceive value in them; they generally indicate that they learned skills that they would otherwise not have. In fact, the most agreed-upon statement among students in the survey is "I enjoy taking HPS classes." The HPS Experience Survey suggests that if STEM students were initially resistant, that resistance is dropped after experiences in HPS courses. At the same time, this analysis offers recommendations for overcoming obstacles to STEM student engagement. STEM students were often dissatisfied with high reading demands in HPS courses. The learning objectives in an HPS course for non-majors may legitimately differ from those for HPS majors, and thus efforts should be made to assign texts that advance these goals and match student abilities. One should not assume, for example, that a class designed for HPS majors would be engaging or accessible to STEM students. Instructors should consider assigning popular articles, videos, and podcasts - all of which may be more accessible for STEM students - where appropriate. Aligning the subject of this material to student interests is also likely to enhance engagement. Ensuring that the class material is accessible to STEM students - and aligns with their interests - is likely to be helpful when overcoming resistance. That STEM students perceive value in HPS courses helps justify offering them within a STEM curriculum. Furthermore, what may initially seem like an obstacle - students being interested only in STEM - can be turned into an advantage. Student interest in STEM may be used to lower resistance in HPS courses that are required elements of a STEM curriculum. STEM students might initially be skeptical of the value of HPS courses because they perceive them as humanities or social science courses and are forced to take them to fulfill university breadth requirements. However, if STEM students do have a strong preference for classes that engage with science, then it is likely that they would prefer fulfilling such requirements through HPS courses rather than through humanities or social science courses that are unrelated to their interests.
One might hypothesize that, faced with this choice, students would respond well to HPS courses and engage with them relatively deeply while still acquiring the important skills and knowledge that such requirements typically provide. The skills and knowledge may even be more directly applicable to student interests. Integrating HPS as a replacement for non-science-related university requirements may somewhat undermine the purpose of a "breadth" requirement, but to many administrators it may seem like a win-win when proposing the inclusion of HPS in STEM curricula.
6,937
2022-02-26T00:00:00.000
[ "Education", "Sociology", "Engineering", "History" ]
Sorafenib and Mesenchymal Stem Cell Therapy: A Promising Approach for Treatment of HCC

Hepatocellular carcinoma (HCC) is the fifth most commonly diagnosed cancer and the second most common cause of cancer-related death worldwide. Sorafenib (Sora) is used as a targeted therapy for HCC treatment. Mesenchymal stem cells (MSCs) are applied as a new approach to fight malignancies. Drug resistance and side effects are the major concerns with Sora administration. The effect of using the combination of sorafenib and MSCs on tumor regression in xenograft HCC models was evaluated in this study. Methods and Materials. Human hepatocellular carcinoma (HepG2) cells were subcutaneously implanted into the flanks of 18 nude mice. The animals were randomly divided into six groups (n = 3); each received Sora (oral), MSCs (IV injection), MSCs (local injection), Sora + MSCs (IV injection), Sora + MSCs (local injection), or no treatment (the control group). Six weeks after tumor implantation, the mice were sacrificed and tumoral tissues were resected in their entirety. Histopathological and immunohistochemical evaluations were used to measure tumor proliferation and angiogenesis. Apoptotic cells were quantified using the TUNEL assay. Results. No significant difference was found in tumor grade among the treatment groups. Differentiation features of the tumoral cells were histopathologically insignificant in all groups. Tumor necrosis was highest in the hpMSC (local) + Sora group. Tumor cell proliferation was reduced in hpMSC (local) + Sora-treated and hpMSC (IV) + Sora-treated mice compared with the other groups. The proportion of apoptosis-positive cells was greater in the Sora, hpMSC (IV) + Sora, and hpMSC (local) + Sora groups. Conclusion. A combination of chemotherapy and MSCs can yield more favorable results in the treatment of HCC.

Introduction

HCC is the fifth most common malignant tumor and contributes to about 800,000 deaths globally per annum [1-3]. Only a small fraction of patients with HCC are candidates for curative treatments, such as surgical resection, liver transplantation, or radiofrequency ablation [4]. Although numerous novel strategies have been proposed to treat HCC [5], including cell-based therapies, the disease remains challenging to combat. Sorafenib is the only FDA-approved drug that is administered as the first-line systemic therapy in advanced HCC [6, 7]. It is a multitargeted molecule that inhibits the proliferation and angiogenesis of tumor cells via its multi-kinase inhibitory function. However, due to the complexity and heterogeneity of HCC tumor cells, the overall mean survival achieved through sorafenib therapy is less than one year. In addition, its adverse effects and high cost underscore the need for other novel therapeutic approaches [8]. Combination therapies with Sora and other therapeutic agents have therefore been suggested to enhance its effectiveness [9-11]. MSCs have become an attractive subject of investigation for the treatment of HCC [12, 13]. They are suitable candidates for cancer therapy due to their multipotency and potential to differentiate into various cell lineages [14], their immunoregulatory effects [15], and their chemotactic properties that allow them to reside in tumor-bearing regions [16, 17]. MSCs also induce their effects through upregulation of several proapoptotic genes and downregulation of various antiapoptotic proteins [18].
These cells have been shown to have the capacity both to engraft in the liver of carcinoma-bearing BALB/c mice and to differentiate into hepatocyte-like cells. Furthermore, they can induce tumor cell necrosis [19]. There exists, however, an opposing view regarding the effects of MSCs: they may in fact enhance the growth and metastatic potential of tumoral cells [20, 21]. Further investigations are required to understand the mechanisms underlying such effects. This study aimed to investigate the antiangiogenic properties of sorafenib and the potential of MSCs, alone or in combination, to induce tumor apoptosis in a nude mouse model of HCC.

Reagents. A total of 21.6 mg of Sora powder (purchased from American LC LAB Company) was dissolved in 150 μL DMSO and 850 μL of sterile physiologic serum to obtain 1 mL of solution containing 21.6 mg Sora, 50 μL of which corresponded to our desired dose of 60 mg/kg.

Cell Culture. HepG2 cells were purchased from the National Center for Biological and Genetic Resources of Iran, cultured in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS), penicillin (100 U/mL), and streptomycin (100 μg/mL), and incubated under standard conditions (37°C, 5% CO2 atmosphere, and 95% humidity). Human placenta-derived MSCs were obtained from a single healthy donor [22], cultured in high-glucose DMEM medium under the conditions mentioned above, and used at early passage (3-4).

Xenograft Model. Eighteen male athymic nude mice (nu/nu; C57BL/6) aged 6 to 8 weeks were obtained from the Omid Institute for Advanced Biomodels. The treatments applied in this study were approved by the Ethical Committee of TUMS. The mice were housed and maintained under optimized hygienic conditions in an individually ventilated cage system. The average temperature of each cage was 23°C with a relative humidity of 65%. Animals were fed an autoclaved commercial diet with water ad libitum, and the three ethical principles of working with animals (reduction, refinement, and replacement) were implemented. For HCC tumor implantation, 1 × 10⁷ HepG2 cells were suspended in 100 μL of serum-free medium containing 100 μL Matrigel (Corning: 354277) and then inoculated subcutaneously into both the right and left flanks of each mouse. Tumor sites were monitored three times weekly and measured using Vernier calipers. Tumor volume was calculated using a standard formula (length × width² × 0.52). When a tumor progressed to an advanced stage (volume greater than 200 mm³), treatment was initiated. The mice were randomly divided into six groups: Sora (60 mg/kg/day) oral, MSC (IV injection), MSC (local injection), Sora (60 mg/kg/day) + MSC (IV injection), Sora (60 mg/kg/day) + MSC (local injection), and control. Injection of human placenta-derived MSCs (5 × 10⁵) was via the tail vein in the 2nd and 4th groups and into the tumor margin in the 3rd and 5th groups, whereas the 6th group (control) received a 50 μL combination solution of DMSO and sterile physiologic serum (at a ratio of 150 to 850, respectively), together with an injection of 100 μL of DMEM in the tail vein and another 100 μL in the tumor margin. An additional injection of MSCs was given one week later. Sorafenib treatment (once a day) via gavage was initiated 15 days after HCC cell injection. The mice were sacrificed in week 4 post-implantation of tumors, and their tumoral tissues and blood were collected. RNAlater was then added to both the blood samples (after isolation of serum) and the tumor tissues (1 mL per 1000 mm³).
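A minimal sketch of the tumor-volume formula and the dosing arithmetic described above; the example measurements are hypothetical and the helper names are ours:

```python
# Sketch of the standard tumor-volume formula and the Sora dosing
# arithmetic from the Methods. Example inputs are hypothetical.

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Standard formula: volume = length x width^2 x 0.52."""
    return length_mm * width_mm ** 2 * 0.52

def sora_volume_uL(body_mass_g: float, dose_mg_per_kg: float = 60.0,
                   stock_mg_per_mL: float = 21.6) -> float:
    """Volume of the 21.6 mg/mL Sora stock needed for a given mouse."""
    dose_mg = dose_mg_per_kg * body_mass_g / 1000.0
    return 1000.0 * dose_mg / stock_mg_per_mL

print(tumor_volume_mm3(9.0, 7.0))  # ~229 mm3, above the 200 mm3 threshold
print(sora_volume_uL(18.0))        # 50.0 uL for an assumed 18 g mouse,
                                   # matching the stated 60 mg/kg dose
```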
After washing with physiologic serum, tumor samples were transferred to and kept in formalin buffer.

Analysis of Biochemical Factors. Blood samples were collected from the mice and centrifuged at 800 RCF. To evaluate liver function, serum was extracted and the levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) enzymes and urea were determined. Their levels were measured using an automated biochemical analyzer (Mindray).

Histopathological Study. To evaluate the effect of treatment on the histopathological features of tumor sections, the mice were euthanized and the tumoral tissues were dissected on day 28 after treatment, fixed in 10% neutral buffered formalin, and finally processed and embedded in paraffin. The paraffin-embedded samples were sectioned at 5 μm thickness and stained with hematoxylin and eosin (H&E). The histological sections were blindly evaluated by an expert pathologist under light microscopy (Olympus, Japan) according to the Edmondson-Steiner grading system (1954) [23] for HCC. Furthermore, any histopathological changes such as inflammatory response, necrosis, hyperemia, and hemorrhage were compared among the groups.

Immunohistochemistry (IHC). Immunohistochemical study was performed on 4 μm-thick paraffin sections to evaluate the proliferating cell nuclear antigen and angiogenesis using monoclonal primary mouse anti-human Ki67 (Biocare Medical, USA; 1:200) and anti-human CD34 antibodies (Biocare Medical, USA; 1:100), respectively. The proliferative index was recorded as the mean percentage of positive cells, determined by counting the number of positively stained cells among 100 nuclei in five randomly selected high-magnification fields (200×) using the software Image-Pro Plus® V.6 (Media Cybernetics, Inc., Silver Spring, USA). The angiogenesis index was recorded by counting the CD34-positive vessels in five fields at 200× magnification, and the findings were expressed as the mean number of vessels ± standard error of the mean (SEM). Sections stained without the primary antibody for Ki67 and CD34 were used as negative controls.

Terminal Deoxynucleotidyl Transferase (TdT) dUTP Nick-End Labeling (TUNEL) Assay. The TUNEL assay was used to stain apoptotic cells undergoing DNA fragmentation [24]. After routine deparaffinization, rehydration, and blocking, the slides were stained using the DeadEnd Fluorometric TUNEL system (Promega) according to the manufacturer's protocol. The mean number of TUNEL-positive cells was recorded for each group under the light microscope.

Statistical Analysis. The findings were expressed as means and standard deviations (SD). The differences between groups regarding biochemical factors were evaluated by one-way ANOVA. All statistical analyses were performed with STATA Statistical Software Release 15.0 (StataCorp LLC, College Station, TX). P values < 0.05 were considered statistically significant.

Sample Size Calculation. According to an accepted rule of thumb for sample size in animal studies [25], any sample size that keeps E between 10 and 20 should be considered adequate, where E = total number of animals − total number of groups. In our research, we used 18 animals in 6 groups, so E = 18 − 6 = 12, which lies between 10 and 20.

Analysis of Biochemical Factors. The mean serum levels of AST, ALT, and urea were all within biologically normal ranges. No significant difference was seen between the groups in terms of these biochemical variables.
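A small sketch of the sample-size rule of thumb and of a one-way ANOVA of the kind used for the biochemical comparisons (the paper used STATA; scipy is substituted here, and the serum values below are invented for illustration):

```python
# Sketch: resource-equation rule of thumb and a one-way ANOVA.
# The ALT values are hypothetical, not the study's data.
from scipy import stats

def resource_equation_E(n_animals: int, n_groups: int) -> int:
    """E = total animals - total groups; adequate when 10 <= E <= 20."""
    return n_animals - n_groups

E = resource_equation_E(18, 6)
print(E, 10 <= E <= 20)  # -> 12 True

# Hypothetical ALT values (U/L) for three of the six groups
control = [42.0, 45.5, 41.0]
sora = [44.0, 40.5, 43.0]
sora_msc = [43.5, 42.0, 46.0]
f_stat, p_value = stats.f_oneway(control, sora, sora_msc)
print(round(f_stat, 2), round(p_value, 3))  # P > 0.05: no group difference
```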
Histopathological Study. Histopathological evaluation of the primary tumors showed a solid pattern composed of thick trabeculae and sheets of tumoral cells compressed into a compact mass. We did not find any difference in tumor grade (using the Edmondson-Steiner grading system) among the groups, and the tumoral cells were histopathologically high grade (III and IV), i.e., poorly differentiated, in all treatment groups. In both the hpMSC (IV)-treated and control groups, numerous pleomorphic tumor giant cells were evident histopathologically (Figure 1, thick arrows). Moreover, different degrees of necrosis were seen in each group (Figure 2). The highest severity of necrosis was detected in the hpMSC (local) + Sora-treated mice. These results showed that Sora, alone and in combination with MSCs, significantly induced tumor tissue necrosis compared with the control group. Although Sora alone was able to induce tumor cell necrosis, both local and IV administration of hpMSCs successfully enhanced this effect.

The proliferation rate of tumoral cells was determined by analyzing the mean percentage of tumoral cells immunopositive for Ki67, a marker of cell proliferation, in five randomly selected sections. As shown in Figure 3, unlike Sora or MSCs alone, co-treatment with Sora and MSCs (local or IV) significantly reduced tumoral cell proliferation compared with the control group.

TUNEL Assay. The TUNEL assay was used to determine whether the administration of Sora and MSCs alone, as well as the combination therapy of Sora with MSCs, can inhibit tumor growth by inducing apoptosis in the tumor cells. The number of apoptotic cells was counted in five high-power fields (400× magnification), and the mean percentage of apoptotic cells was reported. Sora, alone and in combination with MSCs (local or IV), showed a significantly higher count of apoptosis-positive cells than the control group (P < 0.01; Figure 4). In addition, the rate of apoptosis in the combination therapy group (MSC + Sora) was significantly higher than that in the Sora-alone group.

Discussion

Available cancer therapies such as chemotherapy, liver transplantation, surgical resection, radiofrequency ablation, immunotherapy, and hormone therapy have different response rates and efficacies due to the vast heterogeneity of HCC [26, 27]. Sorafenib is an approved molecularly targeted therapy administered for the treatment of patients with advanced HCC through its antiproliferative, antiangiogenic, and proapoptotic functions. These anticancer functions are achieved by targeting growth factor receptors such as the vascular endothelial growth factor receptors (VEGFRs) and the platelet-derived growth factor receptor (PDGFR), as well as rapidly accelerated fibrosarcoma (Raf) kinases. Systemic treatment with sorafenib can not only improve overall survival but also delay or inhibit the progression of the tumor; however, the mean survival in this group of patients does not exceed one year, and not all patients can tolerate the drug. Therefore, targeting HCC with a combination of Sora plus other therapeutic agents is a reasonable and promising topic of investigation [9, 11].
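For illustration, the following sketch shows how the proliferative and angiogenesis indices described in the Methods (Ki67-positive cells among 100 nuclei per field; CD34-positive vessels per field, reported as mean ± SEM) can be computed; the field counts below are hypothetical:

```python
# Sketch of computing the proliferation and angiogenesis indices from
# per-field counts; the counts here are invented, not the study's data.
import statistics

def proliferative_index(pos_counts, nuclei_per_field=100):
    """Mean percentage of Ki67-positive cells across fields
    (positive cells counted among 100 nuclei per field)."""
    return statistics.mean(100 * c / nuclei_per_field for c in pos_counts)

def angiogenesis_index(vessel_counts):
    """Mean number of CD34-positive vessels per field +/- SEM."""
    mean = statistics.mean(vessel_counts)
    sem = statistics.stdev(vessel_counts) / len(vessel_counts) ** 0.5
    return mean, sem

ki67_positive = [38, 42, 35, 40, 45]       # Ki67+ cells per 100 nuclei, 5 fields
vessels = [12, 15, 9, 14, 11]              # CD34+ vessels per 200x field
print(proliferative_index(ki67_positive))  # -> 40.0 (%)
print(angiogenesis_index(vessels))         # -> (12.2, ~1.07)
```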
In the last decade, cell therapy with MSC has been shown to be a promising approach due to properties such as easy extraction from various tissues (e.g., adipose tissue, bone marrow, cartilage, umbilical cord blood, and even some solid tumors), fewer ethical concerns, optimal expansion and differentiation into a variety of cell lineages, the ability to migrate to injured, inflamed, and cancerous tissues, and immunoregulatory, proregenerative, and antimetastatic effects through the production of several GFs and cytokines [13-15,28]. Apart from these regenerative effects attributed to MSCs, this therapy has the pitfall of promoting revascularization, which may contribute to the progression of malignancies [29]. In addition, MSCs can release various cytokines that influence tumor angiogenesis, including VEGF and transforming growth factor (TGF) β1 [30,31]. To investigate the likely mechanisms behind the more favorable outcomes observed when Sora was administered in combination with MSCs, we took advantage of proliferation and angiogenesis markers, namely Ki67 and CD34, respectively. The mean serum levels of AST, ALT, and urea, as surrogates of liver and kidney function, were assessed to ensure the safety of our treatments; none showed any significant signs of toxicity. The degree of necrosis was another variable compared between treatment groups to confirm the efficacy of combination therapy. Histopathological evaluation using the Edmondson-Steiner grading system also confirmed that the combination of MSCs and Sora increases the degree of necrosis, and that the necrotic effect of sorafenib alone was higher than that of MSCs against HCC tumors. The IHC analysis was used to assess the proliferation and angiogenesis indices in tumor tissues. Ki67 is a marker for detecting cell proliferation and has been demonstrated to be a prognostic marker for survival in HCC patients [32,33]. Moreover, the expression of Ki67 is directly proportional to more advanced HCC stage and poorer differentiation [34]. Here, we determined the proliferation rate of tumoral cells by counting the mean percentage of Ki67-positive cells. The data obtained from IHC showed that the combination of Sora and MSC therapy significantly decreased the proliferation rate of tumoral cells compared with Sora or MSCs alone. Furthermore, local injection of MSCs showed higher efficacy in inhibiting the proliferation of tumor cells compared with systemic IV injection. Angiogenesis, a driver of metastasis and tumor growth, can be quantified by microvascular density (MVD). MVD is evaluated by immunohistochemical assay using an endothelial marker (CD34) that is widely used for the assessment of angiogenesis in HCC [35,36]. Endothelial cells can be derived from human peripheral CD34-positive cells and contribute to angiogenesis in adults [37]. In addition, CD34 is a more sensitive and specific endothelial cell marker for detecting new microvessels in HCC than other commonly used endothelial markers such as CD31 and von Willebrand factor (vWF) [38]. We therefore used CD34 antibodies for this purpose. The stronger antiangiogenic effect of sorafenib compared with MSCs is probably due to its inhibitory effect on the serine-threonine kinase BRAF and the receptor tyrosine kinase activity of VEGFRs [39].
Furthermore, the results showed that HCC treatment with the combination of MSCs and Sora was more effective in reducing microvessel density than treatment with MSCs or Sora alone; thus MSCs, when combined with sorafenib, clearly have antiangiogenic effects on HCC. The TUNEL assay was used to assess the apoptotic response of tumor cells to treatment [40]. The data obtained from the TUNEL assay showed that the combination of Sora and MSCs (local or IV) significantly increased the proportion of apoptosis-positive tumoral cells compared with the control group and Sora alone (Figure 4). These results suggest that the combination of Sora and MSCs can significantly reduce the growth of tumor cells by inducing apoptosis. These findings are in agreement with other studies assessing combination therapy with Sora and other therapeutic agents, such as gemcitabine [41], for HCC treatment, which reported decreased cell viability and promotion of apoptosis [42,43]. The antitumor effect of Sora, alone or in combination with other antitumor agents, can result from drug-induced apoptosis [44,45].

Conclusion
We conclude that although there is no single best treatment for HCC, the combination of sorafenib and MSCs appears more promising than sorafenib monotherapy. However, further investigations in this field would pave the way for more extensive clinical trials to take the method to the bedside. We propose investigating the effect of drug concentration on the efficacy of such treatment. Future research should also focus on the signaling pathways and the molecular mechanisms involved in both the development of HCC and the effects of MSCs on the progression of tumor cells.

Data Availability
All data are available upon request to the corresponding author (javad0verdi@gmail.com).
Impact of Vitamin C on Gene Expression Profile of Inflammatory and Anti-Inflammatory Cytokines in the Male Partners of Couples with Recurrent Pregnancy Loss

Immune system disorders and increased inflammation in the male reproductive system can lead to fetal risk in the early stages of development and implantation. Antioxidants such as vitamin C can play a protective role against sperm inflammatory reactions. This study aimed to evaluate the effect of vitamin C on the expression of inflammatory and anti-inflammatory cytokine genes in the male partners of couples with recurrent pregnancy loss. In this randomized clinical trial, twenty male partners of couples with RPL were examined for sperm parameters and the expression profile of selected inflammatory and anti-inflammatory cytokine genes before and after treatment with vitamin C. Normal sperm morphology and sperm concentration were significantly higher in each patient after treatment with vitamin C than before (p ≤ 0.05). The mRNA levels of interleukin 6 and tumor necrosis factor-alpha were significantly decreased in the sperm of patients after treatment with vitamin C compared to before treatment. In contrast, the gene expression levels of interleukin 4 and transforming growth factor-beta showed a significant increase in the sperm of patients after treatment with vitamin C. Daily oral administration of vitamin C may improve the fertility potential of male partners of couples with RPL, not only through the improvement of sperm parameters but also by modulating the expression profile of inflammatory and anti-inflammatory genes. Further studies at the protein level are needed to clarify the prognostic value of TNF-α and IFN-γ in evaluating recurrent abortion risk in infertile male partners. This trial is registered with IRCT20180312039059N1.

Introduction
According to the American Society of Reproductive Medicine, recurrent pregnancy loss (RPL) is two or more consecutive miscarriages before 20-24 weeks of gestation [1]. Although distinct risk factors for RPL have been reported, including genetic factors, immune dysfunction and autoimmune disorders, hormonal factors, uterine anatomy, infections, and thrombosis, the etiology of over 50% of RPLs remains unknown [2]. Recent studies suggest that male partners of couples with RPL who have a normal karyotype may have a high percentage of sperm abnormalities. A male gamete with genetic abnormalities or epigenetic alterations may fertilize the oocyte and severely affect early embryonic development [3,4]. Recent studies showed that abnormal numbers or rearrangements of chromosomes in the male parent lead to higher abortion rates, reduced fertility, or infertility [5,6]. Sperm parameters may be damaged by aging or by drug, immunological, radiation, and environmental factors such as lifestyle, diet quality, physical activity, and infections. One study reported that increased sperm abnormalities or aneuploidy may increase the risk of miscarriage [7]. Furthermore, immunological factors may be involved in sperm abnormalities in male partners of couples with RPL [8]. Evidence suggests that physiological exposure to semen has modulatory effects on the immune system. Seminal plasma includes immunosuppressive factors such as TGF-β, IL-10, and prostaglandin E (PGE) [9]. Seminal plasma also contains prostaglandins and polyamines along with TGF-β, which recruit inflammatory cells into the uterus and suppress the immune response.
Based on this study, TGF-β and activin in seminal plasma may affect the function of cervical immune cells after coitus [10]. Taima et al. showed that the production of inflammatory cytokines by endometrial NK cells is regulated by seminal plasma exposure, indicating immune compatibility between semen and endometrium [11]. Seminal plasma destroys free radicals with antioxidant factors such as vitamin C, carnitine, tyrosine, uric acid, glutathione peroxidase, and pyridoxine [12]. Vitamin C (ascorbic acid) is a component of the human diet with various physiological functions, and its deficiency is associated with many symptoms such as malaise, fatigue, loss of appetite, petechial bleeding, purpura, swollen or bleeding gums, corkscrew hairs, and follicular hyperkeratosis [13-15]. Vitamin C may play an essential role in the testicular antioxidant defense system and support spermatogenesis [16]. In addition, it has potent anti-inflammatory properties. It has been reported that vitamin C can regulate inflammatory status by decreasing IL-6 and hs-CRP in obese patients with hypertension and/or diabetes [17]. Male fertility disorders impose heavy costs and are a global issue, and many studies have been conducted on the effect of antioxidants on mammalian sperm. Therefore, the present study aimed to evaluate the effect of vitamin C on the gene expression of inflammatory and anti-inflammatory cytokines, as well as sperm parameters, in the male partners of couples with RPL.

Participants. This randomized clinical trial was designed as a complementary investigation to our previous study, registered in the Iranian Registry of Clinical Trials with the national ID no. IRCT20180312039059N1 [18]. Twenty male partners of infertile couples with RPL due to male factor infertility were randomly selected, using a random number table, from men referred to the Yazd Research and Clinical Center for Infertility. Informed consent was obtained from all participants. In addition to medical records, each patient was asked to complete a questionnaire covering demographic information and the inclusion and exclusion criteria; regarding the exclusion criteria, the questions addressed any prior consumption of chemical substances. The inclusion criteria were a history of two or more pregnancy losses, age less than 40 years, sperm concentration of 7-14 million per ml, total sperm motility <40%, and sperm with normal morphology <4% according to the 2010 World Health Organization (WHO) criteria [21]. Men with a history of alcohol use, any consumption of tobacco, antidepressants, or antioxidants, or with obesity (based on body mass index), diabetes, or varicocele were excluded from this study. Exclusion criteria were also applied to the female partners, such as hormonal imbalance, chromosomal changes, tubal obstruction, and bacterial or viral infections. Semen samples from each participant were evaluated before and after treatment with vitamin C. Briefly, participants were prescribed 250 mg of vitamin C daily in tablet form (Avicenna Company, Tehran, Iran) by a urologist for 3 months [18]. Participants were advised to consume fruits and vegetables throughout the intervention period and to avoid soft drinks, soybeans, canned foods, and even unnecessary use of mobile phones or laptops. Before and after treatment with vitamin C, the sperm parameters and gene expression levels of inflammatory and anti-inflammatory cytokines were assessed.
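As an illustration only, the inclusion thresholds listed above can be encoded as a simple screening function; the function name and argument names are hypothetical, not part of the study protocol.

```python
# Hypothetical helper encoding the study's inclusion thresholds (WHO 2010).
def meets_inclusion_criteria(age, n_pregnancy_losses, conc_million_per_ml,
                             total_motility_pct, normal_morphology_pct):
    return (age < 40                                  # younger than 40 years
            and n_pregnancy_losses >= 2               # two or more pregnancy losses
            and 7 <= conc_million_per_ml <= 14        # sperm concentration range
            and total_motility_pct < 40               # total sperm motility < 40%
            and normal_morphology_pct < 4)            # normal morphology < 4%

print(meets_inclusion_criteria(35, 2, 10, 32, 3))     # True: candidate is eligible
```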
The study was conducted by an experienced laboratory technician who was blind to the allocation of participants. All procedures performed in this study followed the ethical standards of the institutional or national research committee and the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Collection and Analysis of Semen. Semen samples were collected by masturbation after at least 2-3 days of abstinence. The samples were liquefied at 37°C for at least 30 min before semen analysis according to the WHO 2010 guidelines. The concentration and motility of the samples were determined under light microscopy at ×400 magnification. The Diff-Quik staining method was used for the assessment of sperm morphology.

Cytokine Gene Expression. In this study, the relative gene expression methodology, including RNA extraction, quality control of extracted RNA, cDNA synthesis, and relative gene expression assessment, was performed according to our previous studies [18,20]. Total RNA was extracted from all sperm samples using a total RNA extraction kit (Parstous Biotechnology, Iran), according to the manufacturer's instructions. RNA concentration and quality were evaluated using a spectrophotometer with absorbance at 260 nm (Photobiometer, Eppendorf, Germany). cDNA synthesis from each RNA sample (100 ng/µL) was performed using the RevertAid First Strand cDNA Synthesis Kit according to the manufacturer's protocol (Parstous Biotechnology, Iran). Specific primers for IL-10, IL-4, IL-6, TNF-α, IFN-γ, TGF-β, and GAPDH (reference gene) were used for real-time quantitative reverse transcriptase-polymerase chain reaction (qRT-PCR) (Table 1). qRT-PCR was performed with SYBR Green Master Mix (Amplicon) using the StepOne system (Applied Biosystems, CA, USA). For each reaction, cDNA (2 μL), forward primer (1 μL), reverse primer (1 μL), master mix (10 μL), and 6 μL nuclease-free water were combined to a total volume of 20 μL. All reactions were carried out in duplicate. The qRT-PCR protocol consisted of the following: 10 min at 95°C, followed by 40 cycles of an amplification stage at 95°C for 15 s, 60°C for 30 s, and 72°C for 30 s. A melting curve stage was run after the cycling stage. The analysis of the data for relative gene expression was conducted by the 2^−ΔΔCT method.

Outcome Measurement. Clinical pregnancy was confirmed by detection of a gestational sac with a fetal heartbeat. Abortion rate was defined as clinical pregnancy loss before the 20th week of gestation. A live birth was recorded when a fetus exited the maternal body and immediately showed vital signs.

Statistical Analysis. All statistical analyses were performed using SPSS software for Windows ver. 20.0 (IBM Corp., Armonk, NY, USA). Data were reported as means ± standard error of the mean. The paired t-test was used to analyze the data before and after treatment with vitamin C. P values less than 0.05 were considered significant. Regarding the gene expression levels of IL-4 and TGF-β, significantly higher mRNA levels were seen after treatment than before (P = 0.01 and P = 0.02, respectively). In contrast, the relative expression of IL-6 and TNF-α was significantly decreased in the sperm of patients after treatment compared to before treatment. IFN-γ was significantly upregulated after treatment (P = 0.002) (Table 2). (Table 2 note: data are presented as mean ± SEM according to the Wilcoxon test; *P < 0.05 was considered significant.)
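For illustration, here is a minimal sketch of the 2^−ΔΔCT calculation and the paired before/after comparison described above; all Ct values are hypothetical placeholders.

```python
# Minimal sketch (hypothetical Ct values) of 2^-ddCt relative expression
# with a paired t-test on per-patient delta-Ct values.
import numpy as np
from scipy import stats

# Ct values for a target cytokine gene and the GAPDH reference gene,
# measured in the same patients before and after vitamin C treatment.
ct_target_before = np.array([24.1, 23.8, 24.6, 24.0])
ct_gapdh_before  = np.array([18.0, 17.9, 18.2, 18.1])
ct_target_after  = np.array([25.4, 25.0, 25.9, 25.2])
ct_gapdh_after   = np.array([18.1, 18.0, 18.3, 18.0])

# delta-Ct normalizes the target gene to the reference gene within each sample.
d_ct_before = ct_target_before - ct_gapdh_before
d_ct_after  = ct_target_after  - ct_gapdh_after

# delta-delta-Ct compares after-treatment to before-treatment samples;
# fold change = 2^(-ddCt), with values < 1 indicating down-regulation.
dd_ct = d_ct_after - d_ct_before
fold_change = 2.0 ** (-dd_ct)

# Paired t-test on the per-patient delta-Ct values (before vs. after).
t_stat, p_value = stats.ttest_rel(d_ct_before, d_ct_after)
print(fold_change.mean(), p_value)
```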
A significant inverse correlation was seen between recurrent abortion and TNF-α and IFN-γ gene expression in the sperm cells of infertile men before treatment (r = −0.41 and r = −0.39; p = 0.004 and p = 0.01, respectively).

Discussion
Cytokines play an important role in the human reproductive process by modulating the immune system [21]. Cytokines may be involved in all stages of reproduction and affect pregnancy outcomes [22]. Cytokines such as IL-4, IL-6, and IL-10 appear to favor successful pregnancies, while cytokines such as TNF-α and interferon (IFN)-γ are harmful to pregnancy and prevent fetal growth and development [23]. Daher et al. investigated the serum levels of Th1 and Th2 cytokines in the peripheral blood of 29 women with RPL compared with 27 healthy women as controls. They showed that the levels of IFN-γ and TNF-α were higher in the RPL group compared to the controls. However, IL-6 and TGF-β were not significantly different between the two groups. They concluded that Th1 cytokines may play an essential role in the pathogenesis of RPL [24]. The present study aimed to evaluate the effect of vitamin C on the expression of inflammatory and anti-inflammatory cytokine genes in the male partners of couples with recurrent pregnancy loss. Vitamin C decreased proinflammatory cytokine gene expression (IL-6 and TNF-α) and increased anti-inflammatory cytokine gene expression (IL-4 and TGF-β). However, vitamin C consumption may not affect the gene expression of IL-10. In the present study, IFN-γ gene expression was significantly increased. Molina and colleagues conducted an in vitro study that aimed to evaluate the effect of vitamin C on functional parameters of healthy human lymphocytes. The findings showed a significant reduction in IFN-γ secretion and a significant increase in IL-10 levels in lymphocyte cells after treatment with 100 μM vitamin C. These results contrast with our findings, probably owing to fundamental differences in the methodology and study design of the two studies [25]. Sanka et al. investigated a set of cytokine levels in the seminal plasma of infertile men with or without genital tract inflammation. They showed that proinflammatory cytokines modulated oxidative stress and led to reasonable improvements in the sperm parameters of infertile men [26]. Studies have demonstrated that immune system function plays an effective role in the pathogenesis of sperm-cell abnormalities in male infertility. For instance, cytokines secreted by various cells of the male reproductive system, such as Sertoli cells and Leydig cells, are involved in male fertility and may affect steroidogenesis, spermatogenesis, and sperm function [27,28]. High levels of proinflammatory cytokines, such as IL-1β, IL-6, and TNF-α, cause a decrease in sperm quality by inducing oxidative stress and lipid peroxidation [29]. Chyra-Jach et al. showed that infertility and sperm abnormalities in males with asthenospermia and oligoasthenospermia may be promoted by decreased antioxidant activity and increased levels of cytokines and proinflammatory chemokines in semen [30]. Although Chyra-Jach et al. investigated cytokines at the protein level, their results are consistent with our data, obtained at the gene level.
The effect of vitamin C on sperm parameters has been reported in several studies [18,31]. In line with our previous study [18], the results of the present research showed that vitamin C supplementation can significantly improve the motility and morphology of sperm in male partners of infertile couples with RPL. Moreover, previous results showed that the pregnancy rate increased in women with RPL after administration of vitamin C to their male partners [18]. Regarding our findings, the modulatory effect of vitamin C on the immune system could potentially be one of the main causes of the improved pregnancy rates in these patients. Nazari et al. investigated the effect of vitamin E and zinc on the sperm parameters of 60 couples with RPL. Consistent with the present study, they showed that sperm motility and morphology improved significantly after antioxidant use [32]. Akmal et al. reported that consumption of vitamin C by infertile men may improve the count, motility, and morphology of sperm and may increase pregnancy rates [31]. Taken together, it seems that vitamin C supplementation may remarkably improve semen quality along with the pregnancy rate in men with male factor infertility and a history of RPL. To our knowledge, there is little available evidence-based information about the role of cytokines in recurrent miscarriage, and more clinical trials are still required before any supplement can be administered as a treatment option for infertile couples with RPL. Also, little information is available about the exact mechanism underlying the relationship between aberrant expression of inflammatory/anti-inflammatory cytokines in spermatozoa and RPL. However, according to the literature, Sertoli cells, Leydig cells, and leukocytes residing in the testis, such as monocytes, macrophages, and mast cells, secrete a variety of cytokines that balance inflammatory and anti-inflammatory signals [33]. Cytokines play an important role in gap junctions and severely affect spermatogenesis. If the balance between inflammatory and anti-inflammatory cytokines in the testis is disturbed in favor of inflammatory cytokines, reactive oxygen species (ROS) may increase in the testicular environment, disrupting the Sertoli cell cytoskeleton and creating transient openings through which spermatocytes can pass [34,35]. In this case, abnormal cells may exit through these transient routes together with mature cells. In addition, the definitive role of ROS in DNA damage should not be overlooked [18,36]. If abnormal sperm fertilize female gametes, they may interfere with the embryo implantation process and cause RPL, which may be one of the reasons for RPLs of unknown cause. More clinical trials with larger sample sizes and control individuals, evaluation of cytokines at different levels (serum, seminal plasma), and the use of semen from fertile men as a positive control are recommended for future studies.

Conclusion
Daily oral administration of vitamin C may improve the fertility potential of male partners of couples with RPL, not only through the improvement of sperm parameters but also by modulating the expression profile of inflammatory and anti-inflammatory genes. Further studies at the protein level are needed to clarify the prognostic value of TNF-α and IFN-γ in evaluating recurrent abortion risk in infertile male partners.
Data Availability
The data used to support the findings of this study are included within the article and are available from the corresponding author upon request.
Rare and localized events stabilize microbial community composition and patterns of spatial self-organization in a fluctuating environment

Spatial self-organization is a hallmark of surface-associated microbial communities that is governed by local environmental conditions and further modified by interspecific interactions. Here, we hypothesize that spatial patterns of microbial cell-types can stabilize the composition of cross-feeding microbial communities under fluctuating environmental conditions. We tested this hypothesis by studying the growth and spatial self-organization of microbial co-cultures consisting of two metabolically interacting strains of the bacterium Pseudomonas stutzeri. We inoculated the co-cultures onto agar surfaces and allowed them to expand (i.e., range expansion) while imposing fluctuating environmental conditions that alter the dependency between the two strains. We alternated between anoxic conditions that induce a mutualistic interaction and oxic conditions that induce a competitive interaction. We observed co-occurrence of both strains in rare and highly localized clusters (referred to as "spatial jackpot events") that persist during environmental fluctuations. To resolve the underlying mechanisms for the emergence of spatial jackpot events, we used a mechanistic agent-based mathematical model that resolves growth and dispersal at the scale relevant to individual cells. While co-culture composition varied with the strength of the mutualistic interaction and across environmental fluctuations, the model provides insights into the formation of spatially resolved substrate landscapes with localized niches that support the co-occurrence of the two strains and secure co-culture function. This study highlights that, in addition to spatial patterns that emerge in response to environmental fluctuations, localized spatial jackpot events ensure the persistence of strains across dynamic conditions.

INTRODUCTION
Microbial communities frequently experience perturbations and spatiotemporal fluctuations in their local environmental conditions [1-6]. Such perturbations and fluctuations can have important effects on community stability [7] and can modulate inter- and intra-specific cell-cell interactions [8]. For example, many soil environments experience alternating cycles of wet and dry conditions that can induce changes in community composition by promoting growth during hydrated conditions [9,10] and by reducing distances between individual cells, which facilitates cell-cell interactions, during unsaturated conditions [11]. In coastal environments, tidal dynamics can modify environmental conditions and consequently impose changes on community composition [12] and metabolic activity [13]. On plant leaf surfaces, diurnal fluctuations can modulate resource availability and change the set of available carbon resources, which can again impose changes on community composition and metabolic activity [14]. Finally, in the human gut, changes in dietary conditions can induce changes in the structure and functioning of the gut microbiome [1]. Thus, environmental perturbations and fluctuations significantly influence the ecological and evolutionary processes governing community structure and functioning [6,15]. In populations growing in an unstructured and steady-state environment, rare stochastic events, such as the accumulation of beneficial mutations, are referred to as jackpot events [37].
However, in spatially structured populations, the persistence of such a mutation during range expansion requires that the mutation emerge in a favorable spatial position that secures its presence at the expansion edge [38]. This is particularly relevant for the sessile growth of microbial colonies, where small populations expand into adjacent unoccupied space and growth is confined to a thin layer of cells at the expansion edge [32]. In the absence of environmental perturbations, microbial communities undergoing range expansion show a decrease in diversity, with only a few lineages persisting at the expansion edge [39-41]. Laboratory and in silico experiments demonstrated that stochastic processes [32,36] and mechanical forces acting between cells [42,43], in combination with initial spatial positioning [24,36], can control the dynamics of diversity loss during sessile microbial range expansion. Further investigations demonstrated the importance of initial spatial positioning when sustained by the local substrate landscape, thus leading to the establishment of successful lineages at the expansion edge [44]. In other words, the presence of a specific cell-type at the expansion edge may result from stochastic processes that do not require beneficial mutations. We use the term "spatial jackpot events" to emphasize the importance of favorable initial spatial positioning [38] in placing cell-types at the expansion edge, while the strength of the metabolic interaction guarantees their stable position there during environmental perturbations. Although spatial self-organization during range expansion has been frequently studied under steady-state conditions (e.g., stable redox conditions), further attention is required to understand how environmental perturbations and fluctuations affect microbial interactions and spatial self-organization.

In this study, we investigated the stability of a cross-feeding microbial co-culture under fluctuating environmental conditions. We hypothesized that temporal fluctuations in environmental conditions that alter the nature of interspecific interactions can lead to irreversible transitions in spatial patterns of cell-types, thus affecting co-culture composition and metabolic functioning. Our hypothesis is based on the following two assumptions: (1) environmental conditions that foster different types of interspecific interactions promote the formation of different patterns of spatial self-organization, and (2) the patterns of spatial self-organization that emerge under one set of environmental conditions can alter co-culture composition, spatial self-organization, and functioning under a different set of environmental conditions. The above assumptions are not met if spatial jackpot events emerge that enable cell-types to maintain a stable position at the expansion edge. We tested this hypothesis with a microbial co-culture that satisfies both of the above-mentioned assumptions. The component strains engage in competition under oxic conditions and in mutualistic cross-feeding of the conditionally toxic metabolite nitrite (NO2−) under anoxic conditions (Fig. 1a). Oxic and anoxic conditions promote the formation of fundamentally different patterns of spatial self-organization [19] (Fig. 1b, c), satisfying the first assumption discussed above. In addition, we predict that the patterns of spatial self-organization that emerge under oxic conditions are detrimental to the co-culture as a whole under anoxic conditions, satisfying the second assumption discussed above (Fig. 1d).
Briefly, anoxic conditions result in a dominance of the nitrite-producing strain (referred to as the producer) at the expansion edge [19,34,45] (Fig. 1d). If the environment changes to oxic conditions, the producer will have preferential access to resources supplied via diffusion from the periphery and will increase in abundance relative to the nitrite-consuming strain (referred to as the consumer). If the environment switches back to anoxic conditions, the increased relative abundance of the producer will result in nitrite accumulation. Over a series of anoxic/oxic transitions, we predict a continual increase in the relative abundance of the producer and the potential accumulation of nitrite to toxic concentrations, thus creating detrimental conditions for the co-culture as a whole. We tested this prediction by repeatedly transitioning the environment between anoxic and oxic conditions and quantifying the effects on co-culture composition and local spatial organization at the expansion edge.

MATERIALS AND METHODS
Experimental system
The experimental microbial co-culture consists of two isogenic mutant strains of the bacterium Pseudomonas stutzeri A1501 [19,34,46,47] (Fig. 1a). The producer has a single loss-of-function deletion in the nirS gene and can reduce nitrate (NO3−) to nitrite (NO2−) but not nitrite to nitric oxide (NO) (Fig. 1a). The consumer has a single loss-of-function deletion in the narG gene and cannot reduce nitrate to nitrite but can reduce nitrite to nitric oxide (Fig. 1a). The two strains differ at only single genetic loci [46], thus preventing potential confounding effects that might otherwise occur if more distantly related strains were used. Both strains have an intact periplasmic nitrate reductase encoded by the nap genes; however, this reductase does not support growth with nitrate under anoxic conditions [46,48] and is likely involved in the dissipation of excess reducing equivalents rather than respiration [48]. To avoid recombination between the two strains when grown together, both have a single loss-of-function deletion in the comA gene [46]. Finally, to distinguish the two strains when grown together, each carries a chromosomally located ecfp or egfp fluorescent protein-encoding gene, encoding cyan and green fluorescent protein, respectively (Fig. 1) [34,46,47]. We imposed different interactions between the producer and consumer by controlling the redox conditions of the environment. When the strains are grown together under anoxic conditions with nitrate (NO3−) as the growth-limiting resource, they engage in a mutualistic nitrite (NO2−) cross-feeding interaction [46]. Nitrite is toxic at low pH, and we can therefore control the strength of the mutualistic interaction between strong and weak by setting the pH of the growth medium to 6.5 or 7.5, respectively. We validated the mutualistic interaction by demonstrating that the biomass of the producer increases in the presence of the consumer at pH 6.5 (two-sample two-sided t-test; p = 1.3 × 10−7, n = 5) (Supplementary Fig. S1). pH itself has no quantifiable effects across this pH range, and there are no experimentally observable effects of nitrite toxicity at pH 7.5 [45,46]. When the strains are grown together under oxic conditions, they engage in a competitive interaction for nutrients, oxygen, and space [19].
For simplicity, we will henceforth only use the terminology oxic and anoxic (implying conditions that induce competitive and mutualistic interactions, respectively) and pH 6.5 and 7.5 (implying conditions that induce strong and weak mutualistic interactions, respectively).

Range expansion experiments and temporal fluctuations
We performed range expansion experiments in which we inoculated the surface of a lysogeny broth (LB) agar plate with mixtures of the producer and consumer and allowed the co-cultures to grow and expand across space [34]. To accomplish this, we first grew the producer and consumer alone in oxic LB medium overnight in a shaking incubator at 37°C and 220 rpm. After the cultures reached stationary phase, we centrifuged them at 5488 × g for 8 min at room temperature, discarded the supernatants, suspended the remaining cells in 1 ml of saline solution (0.89% NaCl, w/w), and adjusted the densities of the producer and consumer independently to an optical density of one at 600 nm (OD600). We then mixed the producer and consumer at a volumetric ratio of 1:1 (producer to consumer) and deposited 1 µl of each mixture onto the center of a separate LB agar plate containing 1 mM sodium nitrate (NaNO3). Because the producer and consumer are isogenic mutants with identical optical properties, equivalent OD600 values correspond to approximately equivalent cell numbers. Prior to inoculating the LB agar plates, we adjusted the pH of the molten agar to 6.5 or 7.5 as described elsewhere [34]. We imposed transitions between anoxic and oxic conditions for fifteen cycles (n = 4 for each pH condition). We imposed anoxic conditions by placing the LB agar plates inside a glove box (Coy Laboratory Products, Grass Lake, USA) containing a nitrogen (N2):hydrogen (H2) atmosphere (97:3) and oxic conditions by placing the LB agar plates in ambient air. We performed oxygen measurements and confirmed that all available oxygen had diffused out of the LB agar plate within 12 h of transferring the plates back into the glove box (Supplementary Fig. S2). A single cycle consisted of incubation for 36 h under anoxic conditions followed by 12 h under oxic conditions. This provided the strains with approximately equivalent expansion opportunity under anoxic and oxic conditions. More specifically, the anoxic growth rates of the two strains with nitrate (NO3−) or nitrite (NO2−) are approximately three-fold slower than their oxic growth rates [46], and we therefore provided approximately three-fold more time to expand under anoxic conditions than under oxic conditions.

Microscopy and image acquisition
We obtained tile scans of the range expansions with a Leica TCS SP5 II confocal laser-scanning microscope (CLSM) (Leica Microsystems, Wetzlar, Germany) with a 5x HCX FL air immersion lens, a numerical aperture of 0.12, a frame size of 1024 × 1024, and a pixel size of 3.027 µm [19,34,45]. We set the laser emission to 458 nm for the excitation of cyan fluorescent protein (encoded by the ecfp gene) with an emission range of 480-493 nm and to 488 nm for the excitation of green fluorescent protein (encoded by the egfp gene) with an emission range of 510-559 nm [19,34,45]. We scanned the range expansions at every transition between anoxic and oxic conditions. We set the illumination plane to capture the expansion edge, where active expansion and self-organization were ongoing, rather than the shallower and non-expanding center.
We exposed the agar plates to ambient air for 1 h prior to image acquisition to allow for maturation of the fluorescent proteins [34].

Quantitative image analysis
We processed the CLSM images using ImageJ (imagej.net) and MATLAB R2017a (MathWorks, Natick, MA, USA). We provide a detailed description of the image processing method used in this study in the Supplementary Text and Supplementary Fig. S3. We quantified co-culture composition and local spatial arrangement at the expansion edge using two measurements: the ratio of consumer-to-producer (indicating the relative abundance of the two strains) and the intermixing index (a measure of local spatial organization). We quantified the ratio of consumer-to-producer within a ring located at the expansion edge using a circular windowing approach, where the outer edge of the ring was located at the expansion edge and the inner edge of the ring was located 50 pixels (corresponding to 151.35 µm) behind the expansion edge (Supplementary Fig. S3). We selected this area to avoid overlap between the focal and previous time points, thus ensuring that time-consecutive measures of the ratio of consumer-to-producer were non-overlapping. We validated our measurements of the ratio of consumer-to-producer by comparing them to those obtained with conventional colony forming unit plate counting (Supplementary Fig. S4). We quantified the intermixing index, which measures the degree of spatial intermixing between the two strains, within a circle located at a radial distance of 50 pixels from the expansion edge as described elsewhere [34,49,50] (Supplementary Fig. S3).

Statistical analyses
We used parametric methods for all of our statistical tests and considered p < 0.05 to be statistically significant. We used the Shapiro-Wilk test to test for normality and the Bartlett test to test for homoscedasticity of our datasets, and considered p > 0.05 to validate the assumptions of our parametric tests (i.e., we found no evidence that our datasets significantly deviate from the assumptions of normality and homoscedasticity). We reported the type of statistical test, the sample size for each test, and the exact p for each test in the results section. We performed all statistical tests using MATLAB R2017a (MathWorks, Natick, MA, USA).
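As a concrete illustration of the two edge metrics defined above, the sketch below computes a consumer-to-producer ratio within a 50-pixel ring and an intermixing index (interspecific boundaries per unit circumference) from a labeled image; the array layout and the toy sector geometry are hypothetical simplifications of the actual image-processing pipeline.

```python
# Hypothetical sketch of the two edge metrics on a labeled 2D image.
import numpy as np

def edge_metrics(labels, center, edge_radius, ring_width=50, n_angles=3600):
    """labels: 2D int array with 0 = empty, 1 = producer, 2 = consumer."""
    yy, xx = np.indices(labels.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    ring = (r <= edge_radius) & (r > edge_radius - ring_width)
    n_producer = np.count_nonzero(labels[ring] == 1)
    n_consumer = np.count_nonzero(labels[ring] == 2)
    ratio = n_consumer / max(n_producer, 1)

    # Sample strain identity along a circle just behind the edge and count
    # transitions between the two strains (interspecific boundaries).
    theta = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr = edge_radius - ring_width / 2
    ys = np.clip((center[0] + rr * np.sin(theta)).astype(int), 0, labels.shape[0] - 1)
    xs = np.clip((center[1] + rr * np.cos(theta)).astype(int), 0, labels.shape[1] - 1)
    series = labels[ys, xs]
    series = series[series > 0]                        # ignore empty pixels
    boundaries = np.count_nonzero(series != np.roll(series, 1))
    intermixing = boundaries / (2 * np.pi * edge_radius)  # boundaries per pixel
    return ratio, intermixing

# Toy example: a colony of alternating producer/consumer sectors.
lab = np.zeros((400, 400), dtype=int)
yy, xx = np.indices(lab.shape)
r = np.hypot(yy - 200, xx - 200)
ang = np.arctan2(yy - 200, xx - 200)
lab[r <= 180] = 1                                       # producer background
lab[(r <= 180) & (np.floor((ang + np.pi) / (np.pi / 6)) % 2 == 0)] = 2
print(edge_metrics(lab, (200, 200), 180))
```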
Fig. 1 Two-strain microbial co-culture used in this study. a The co-culture is composed of two isogenic mutant strains of P. stutzeri that differ in their ability to reduce nitrate (NO3−) and nitrite (NO2−). One strain can reduce nitrate but not nitrite (referred to as the producer; solid blue horizontal lines), whereas the other can reduce nitrite but not nitrate (referred to as the consumer; solid green horizontal line). The two strains also carry either the ecfp blue or egfp green fluorescent protein-encoding gene. Different patterns of spatial self-organization emerge depending on redox conditions. b1 Anoxic conditions induce a mutualistic interaction and "producer-first expansion", where the producer expands ahead of the consumer. This is because the consumer cannot grow until the producer begins producing nitrite. b2 The community is punctuated by individual "consumer-first expansion" patterns that persist to the expansion edge (referred to as spatial jackpot events). c Oxic conditions induce a competitive interaction and "simultaneous expansion" of the two strains, resulting in segregated sectors with interspecific boundaries lying approximately parallel to the expansion direction. The scale bars are 1000 μm. d In a fluctuating environment, the previous range expansion determines the initial spatial positionings of the strains for the subsequent range expansion, and may thus fundamentally alter spatial self-organization. We predict that repeated transitions between anoxic and oxic conditions will result in a gradual decrease in the ratio of consumer-to-producer, thus potentially leading to the accumulation of nitrite to toxic concentrations. This is due to the preferential spatial positioning of the producer at the onset of oxic conditions.

Agent-based mathematical model
We simulated co-culture expansion using a model that combines pseudo two-dimensional nutrient diffusion with an agent-based representation of microbial cells, with localized growth conditions calculated using Monod-type kinetics. We reported details of the model, including all equations, elsewhere [44]. Briefly, we created a hexagonal lattice with a side length of 20 microns that we used as a backbone for diffusion calculations in a spherical domain. We used a total diameter of 1 cm based on one-dimensional Fickian diffusion between nodes while respecting mass balance at each node. We considered only carbon, nitrate (NO3−), and nitrite (NO2−) in the simulations, and we assumed that other nutrients are not growth-limiting. We set constant peripheral sources for carbon and nitrate at concentrations of 22 and 1 mM, respectively, where nitrite is produced solely by the metabolic activity of the producer. We represented microbial cells as super agents [51], where each grid node is inhabited by one strain (i.e., the two strains are mutually exclusive). We inoculated the domain with cells at the center (radius of 2 mm) and attributed each node randomly with either a producer or a consumer cell. We randomly set the initial mass of each cell to be between 10 and 100% of the mass at division. We calculated microbial growth rates using Monod-type kinetics. Under oxic conditions, both strains consume carbon as the growth-limiting nutrient at the expansion edge, and we therefore only added a carbon limitation term. Under anoxic conditions, we added a nitrate (NO3−) limitation term for the producer and a nitrite (NO2−) limitation term for the consumer. We updated biomass using the explicit Euler method and related nutrient consumption at each grid node to the growth rate using yield coefficients. Nitrite is produced by the producer and consumed by the consumer according to stoichiometry. Nitrite toxicity was shown to primarily influence the growth yield [52]. Thus, we calculated the biomass yield coefficient Y for each genotype as a function of the local nitrite concentration [NO2−]:

Y = Y_min + (Y_max − Y_min) × K_inh / (K_inh + [NO2−])

where Y_max is the maximum biomass yield (kg dry weight/mol), Y_min is the minimum biomass yield (kg dry weight/mol), and K_inh is the nitrite inhibition coefficient (mM). We simulated co-culture expansion through cell division combined with a mechanical cell-shoving algorithm. Upon reaching a defined mass at division, microbial cells divide into two, where one daughter cell occupies the current location and the other either occupies an adjacent grid node (if the node is unoccupied or cell shoving is possible) or is aggregated at the current location in a layer above (pseudo three-dimensional colony growth). From the grid node of a dividing cell, the shortest distance to the expansion edge is calculated. If the distance is sufficiently small (<100 µm, 5 grid nodes), then all cells along the shortest path are shoved toward the edge, and the cell previously at the edge is shoved to a node chosen at random from the unoccupied neighboring nodes.
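The growth update just described can be sketched as follows. All parameter values are hypothetical placeholders, and the saturating form of the nitrite-dependent yield is an assumption consistent with the stated Y_max, Y_min, and K_inh parameters rather than a quotation of the published model [44].

```python
# Minimal sketch of the per-cell growth update: Monod-type kinetics with
# carbon and nitrogen limitation terms, a nitrite-dependent biomass yield,
# and an explicit Euler biomass update. Parameter values are hypothetical.
MU_MAX = 0.6                            # 1/h, maximum specific growth rate
K_C, K_NO3, K_NO2 = 0.5, 0.05, 0.05     # mM, half-saturation constants
Y_MAX, Y_MIN = 0.05, 0.01               # kg dry weight / mol
K_INH = 0.5                             # mM, nitrite inhibition coefficient
DT = 60 / 3600                          # h, the 60 s simulation time-step

def biomass_yield(no2):
    # Interpolates between Y_MAX (no nitrite) and Y_MIN (high nitrite);
    # the saturation form is an assumption, not the original equation.
    return Y_MIN + (Y_MAX - Y_MIN) * K_INH / (K_INH + no2)

def growth_rate(strain, c, no3, no2, oxic):
    monod = c / (K_C + c)               # carbon limitation (always active)
    if not oxic:
        if strain == "producer":
            monod *= no3 / (K_NO3 + no3)    # nitrate limitation term
        else:
            monod *= no2 / (K_NO2 + no2)    # nitrite limitation term
    return MU_MAX * monod

def euler_step(mass, strain, c, no3, no2, oxic):
    mu = growth_rate(strain, c, no3, no2, oxic)
    new_mass = mass * (1 + mu * DT)         # explicit Euler biomass update
    consumed = (new_mass - mass) / biomass_yield(no2)  # substrate consumed
    return new_mass, consumed

print(euler_step(1e-15, "consumer", c=22.0, no3=1.0, no2=0.2, oxic=False))
```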
We set the total simulation time to 72 h using a 60 s time-step. During this time, we alternated the environment between anoxic and oxic conditions every 6 h (i.e., a total of six intervals per condition). In comparison to the experiments, we simulated the growth rates of the strains according to oxic conditions and did not alter the parameters for anoxic conditions, for the sake of parameter parsimony. Congruent with the experiment, this parametrization enabled equal division during anoxic and oxic conditions. We note that the consumer does have a slightly slower growth rate than the producer under experimental anoxic conditions [34], but accounting for this has no effect on the qualitative outcomes of the simulations.

Effects of environmental fluctuations on co-culture composition and intermixing
We first tested the effects of fluctuations between anoxic (inducing a mutualistic interaction) and oxic (inducing a competitive interaction) conditions on co-culture composition (quantified as the ratio of consumer-to-producer at the expansion edge) and interspecific mixing (quantified as the number of interspecific boundaries divided by the colony circumference). We expected that, over a series of anoxic/oxic transitions, the ratio of consumer-to-producer at the expansion edge and the degree of intermixing would both decrease (Fig. 1d). To test this, we performed range expansions where we transitioned the environment between anoxic and oxic conditions. While we performed the experiments with defined anoxic and oxic incubation times, our main prediction (i.e., that repeated transitions between anoxic and oxic conditions can induce irreversible pattern transitions that alter co-culture composition and functioning) is independent of the time spent under either of those conditions as long as cells can adjust their metabolism to the new environment (Fig. 1d). These experiments yielded two important outcomes. First, the fitted two-phase linear regressions of the ratio of consumer-to-producer and the intermixing index both depended on the strength of the mutualistic interaction, where the initial rate of decay was faster at pH 7.5 than at 6.5 (Fig. 2a, b). Thus, as the strength of the interdependency increases, the decay in the ratio and the intermixing index slows. Second, at pH 6.5 we never observed the complete loss of the consumer from the expansion edge (i.e., neither the ratio of consumer-to-producer nor the intermixing index reached zero) (Fig. 2a, b), which is counter to our initial expectation (Fig. 1d). We further performed controls under continuous oxic and continuous anoxic conditions (Supplementary Fig. S5). The ratio of consumer-to-producer and the intermixing indices both significantly differed between continuous oxic and continuous anoxic conditions regardless of the pH (two-sample two-sided t-tests; p < 0.05, n = 5) (Supplementary Fig. S5). Thus, these two quantities of spatial self-organization depend on the environmental conditions. The ratio of consumer-to-producer and the intermixing indices also significantly differ between continuous oxic and fluctuating conditions, again regardless of the pH (two-sample two-sided t-tests; p < 0.05, n1 = 4, n2 = 5) (Supplementary Fig. S5). This provides evidence that these two quantities are significantly modulated by periods of anoxic conditions.
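The two-phase linear regression referred to above can be implemented with a simple breakpoint search; the decay curve below is hypothetical, generated only to exercise the fit.

```python
# Sketch of a two-phase (segmented) linear regression: fit two lines joined
# at the breakpoint that minimizes the total squared error.
import numpy as np

def two_phase_fit(x, y):
    best = None
    for k in range(2, len(x) - 2):             # candidate breakpoints
        p1 = np.polyfit(x[:k], y[:k], 1)       # line for the first phase
        p2 = np.polyfit(x[k:], y[k:], 1)       # line for the second phase
        sse = (np.sum((np.polyval(p1, x[:k]) - y[:k]) ** 2)
               + np.sum((np.polyval(p2, x[k:]) - y[k:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, x[k], p1, p2)
    return best  # (SSE, breakpoint, [slope, intercept] phase 1, phase 2)

x = np.arange(15, dtype=float)                 # 15 anoxic/oxic transitions
y = np.where(x < 6, 1.0 - 0.12 * x, 0.28 - 0.004 * (x - 6))  # fast, then slow decay
sse, bp, p1, p2 = two_phase_fit(x, y)
print(bp, p1[0], p2[0])                        # breakpoint and the two slopes
```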
However, the ratio of consumer-to-producer and the intermixing indices were not consistently significantly different between continuous anoxic and fluctuating conditions (Supplementary Fig. S5). Thus, periods of anoxic conditions appear to have larger effects on these two quantities than do periods of oxic conditions, which would be expected as anoxic conditions create an interdependency between the strains.

The number of spatial jackpot events depends on pH
We next tested whether the number of spatial jackpot events that emerge during range expansion depends on the pH, and thus on the strength of the mutualistic interaction. Here, we define a spatial jackpot event as a continuous region of the consumer that persists to the expansion edge. We found that the number of spatial jackpot events was higher at pH 6.5 than at 7.5 (Figs. 3 and 4). We observed mean numbers of spatial jackpot events of 3.5 (SD = 1.3, n = 4) at pH 6.5 and 0.75 (SD = 0.5, n = 4) at pH 7.5, and these means are significantly different from each other (two-sample two-sided t-test; p = 0.007, n = 4) (Figs. 3 and 4c). Thus, the number of spatial jackpot events is larger at pH 6.5, which slows the observed decay in the ratio of consumer-to-producer and the intermixing index over repeated transitions between anoxic/oxic conditions (Fig. 2).

Agent-based model elucidates putative mechanisms for the persistence of spatial jackpot events
To provide further support that the number of spatial jackpot events that emerge during range expansion depends on the pH, and thus on the strength of the mutualistic interaction, we simulated range expansions under fluctuating environmental conditions using an agent-based mathematical model (Fig. 4a, b). While the experiments performed in this study reveal the spatial distributions of strains at the population level, the mathematical model captures the growth dynamics throughout the range expansion at the single-cell level and relates observed processes (such as the nucleation of spatial jackpot events and their persistence during range expansion) to the underlying growth dynamics and the associated substrate landscape. We found that during anoxic conditions, nitrate (NO3−) is consumed by the producer, resulting in the formation of a nitrate gradient with low concentrations at the expansion origin and higher concentrations at the expansion edge (Fig. 5a). During oxic conditions, the producer does not consume nitrate, and nitrate diffuses deep into the expansion area, which diminishes or even eliminates the previously established radial nitrate gradient (Fig. 5a, b). This reduces the effect of nitrate limitation and equilibrates the growth rates of the two strains (i.e., there is a less pronounced relative growth rate advantage for the consumer) (Fig. 5a, b). At pH 6.5, nitrite (NO2−) toxicity slows the growth of the producer and prevents nitrite from accumulating significantly (Fig. 5c). In comparison, at pH 7.5 nitrite accumulates to larger concentrations and there is a smaller relative difference in growth rates between the producer and consumer at the expansion edge (Fig. 5d). These underlying processes affect the numbers and persistence of spatial jackpot events during fluctuations between anoxic and oxic conditions. The high relative growth rate difference between the producer and consumer at pH 6.5 fosters persistence of the consumer at the expansion edge (Fig. 5a), leading to a higher number of spatial jackpot events that protrude to the expansion edge (Fig. 4c).
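For reference, the reported comparison of jackpot-event counts can be reproduced from the summary statistics alone:

```python
# Two-sample two-sided t-test recomputed from the reported means, SDs, and ns.
from scipy import stats

t, p = stats.ttest_ind_from_stats(mean1=3.5, std1=1.3, nobs1=4,   # pH 6.5
                                  mean2=0.75, std2=0.5, nobs2=4,  # pH 7.5
                                  equal_var=True)
print(t, p)   # t ~ 3.95, p ~ 0.0076, consistent with the reported p = 0.007
```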
At pH 7.5, the absence of nitrite (NO2−) toxicity results in a less prominent growth rate difference between the producer and consumer (Fig. 5b) and thus overall lower numbers of spatial jackpot events, congruent with experimental observations (Fig. 4c).

Stability of co-culture composition and intermixing during environmental fluctuations
We next tested whether a steady-state co-culture composition and pattern of spatial self-organization emerges during repeated transitions between anoxic and oxic conditions. Here, we refer to stability as a lack of change in quantitative measures of co-culture composition and spatial self-organization over time. To test this, we quantified two spatial features: the ratio of consumer-to-producer (Fig. 6a) and the intermixing index (Fig. 6b). When tracking the two quantities over the 15 anoxic/oxic transitions, we observed that they evolve toward constant non-zero values with decreasing variance at both pH 6.5 and 7.5. The variance analysis reveals that the ratio of consumer-to-producer at pH 7.5 reaches a constant value more rapidly than at pH 6.5. The constant value emerges after seven transitions at pH 7.5 and after 12 transitions at pH 6.5. The variance in the intermixing index reaches zero after three transitions at pH 7.5, compared to the last transition at pH 6.5. This suggests that the producer is strongly dependent on the consumer when nitrite (NO2−) toxicity is high (pH 6.5), and there are likely stronger benefits to maintaining more balanced ratios of consumer-to-producer and increased intermixing (e.g., the producer advances slowly without the consumer in close spatial proximity to consume nitrite). In contrast, the variance in the ratio of consumer-to-producer and the intermixing index reaches zero earlier at pH 7.5 than at 6.5. This is intuitive, as the producer is less dependent on the consumer when nitrite toxicity is low, and there are therefore weaker benefits to maintaining balanced ratios of consumer-to-producer and intermixing (e.g., the producer can advance without the consumer).

Effect of initial environmental conditions
Our experiments show that the strength of the mutualistic interaction is an important determinant of the numbers and persistence of spatial jackpot events. However, this outcome could additionally be influenced by the initial environmental conditions. We thus used mathematical modeling to test how the initial environmental conditions shape the final patterns of spatial self-organization by varying the initial redox conditions as well as the availability of growth-limiting nutrients (i.e., by providing nitrite [NO2−] in addition to nitrate [NO3−]) (Fig. 7). When nitrite is supplied together with nitrate, a higher number of spatial jackpot events persist to the expansion edge at both pH 6.5 and 7.5, with the similar trend that a higher number of spatial jackpot events emerge at pH 6.5 than at 7.5 (Fig. 7a, c). The interaction strength is amplified at pH 6.5, where local detoxification of nitrite amplifies the growth difference between the two strains and results in more optimal growth conditions in close proximity to spatial jackpot events (Fig. 7a, c). We further tested whether our results are robust to the initial redox conditions (Fig. 7a, b). When the fluctuations are initiated under oxic conditions, we observed higher numbers of spatial jackpot events persisting to the expansion edge at both pH 6.5 (mean = 7.75, SD = 1.25, n = 4) and 7.5 (mean = 3.5, SD = 1.29, n = 4).
Fig. 3 Formation and persistence of spatial jackpot events during repeated anoxic/oxic transitions. Images are after 350 h of range expansion. a Using reflected light, the surface morphology of the entire expansion area is visible. Transitions between anoxic (mutualistic interaction) and oxic (competitive interaction) conditions are imprinted in the expansion biomass as concentric rings (black arrow). b The transitions between anoxic and oxic conditions (black arrow) are more visible using the bright field. c Detail of the spatial jackpot events that developed during different incubation conditions. White stars indicate spatial jackpot events that did not advance to the expansion edge, while the white arrows indicate transitions between anoxic and oxic conditions. d Transitions between anoxic and oxic conditions caused a change in the spatial self-organization of spatial jackpot events. The white arrows indicate a decrease followed by an increase in width. All scale bars are 1000 μm.

Fig. 5 Simulations of local growth rates and nutrient dynamics during range expansion. Spatially resolved relative growth rates (realized growth rate divided by maximum growth rate) for a strong and b weak mutualistic interactions. Under oxic conditions (competitive interaction), growth rates declined radially from the periphery to the center due to carbon limitation. Under anoxic conditions (mutualistic interaction), growth rates also declined radially for the producer due to nitrate (NO3−) limitation, whereas the consumer benefitted from the ubiquitous availability of nitrite (NO2−). Total nutrient content in the simulated domain for c strong and d weak mutualistic interactions. In comparison to static anoxic conditions, nitrate limitation was less prominent due to diffusion of nitrate into the expansion area during oxic conditions. c For a strong mutualistic interaction, nitrite concentrations were low due to the overall higher relative abundance of the consumer. d For a weak mutualistic interaction, nitrite accumulated within the domain due to a lack of strong nitrite toxicity. e When comparing growth rates between weak and strong mutualistic interactions, the producer has a larger difference in growth rate between the two conditions whereas the consumer has a smaller difference.

Fig. 4 Comparison of experimental and simulated patterns of spatial self-organization. At both a pH 6.5 (strong mutualistic interaction) and b pH 7.5 (weak mutualistic interaction), the producer-first expansion pattern dominates the expansion area. However, both pH conditions foster the emergence of spatial jackpot events. c The cyan data points are the numbers of spatial jackpot events that persisted to the expansion edge at pH 7.5. The magenta data points are the numbers of spatial jackpot events that persisted to the expansion edge at pH 6.5. Means are indicated by the gray lines. Congruent with experimental observations, the predicted number of spatial jackpot events in the numerical simulations is higher at pH 6.5 than at 7.5.

During the initial oxic phase, both the producer and consumer can proliferate, creating small pockets of kin cells. During the subsequent anoxic phase, the small pockets of kin cells have a higher chance of being shoved forward by the producer, and can thus form spatial jackpot events that protrude to the expansion edge.
Therefore, regardless of the initial redox condition, the strength of the interaction has a strong influence on the final spatial arrangement and number of spatial jackpot events.

DISCUSSION

In this study, we investigated how fluctuations in environmental conditions that alter interactions between two microbial strains influence the emergence and evolution of spatial self-organization. Using a microbial co-culture consisting of two strains that cross-feed nitrite (NO2−) under anoxic conditions and compete under oxic conditions, we conducted a series of range expansion experiments and complemented experimental observations with insights gained from a mechanistic agent-based model that mimics the experimental conditions. Overall, the emerging patterns of spatial self-organization are consistent with our previous observations of producer-first expansion under anoxic conditions and simultaneous expansion under oxic conditions [19] (Fig. 1b, c). They are also consistent with our expectation that repeated transitions between the two environmental conditions should result in increased abundance and dominance of the producer (Fig. 1d). Contrary to our initial expectation (Fig. 1d), however, we found that the composition of the co-culture is preserved despite repeated transitions between anoxic and oxic conditions (Figs. 2-4). We attribute the stability in co-culture composition and spatial self-organization to the emergence of spatial jackpot events that enable the consumer to remain located at the expansion edge under anoxic conditions, and subsequently to secure its position after the transition to oxic conditions (Fig. 3). Thus, spatial jackpot events are an important mechanism that enables stable community composition in the face of environmental fluctuations and perturbations (Fig. 6). In essence, spatial jackpot events are a form of local spatial pattern diversity within microbial communities [56]. Thus, just as genetic diversity can provide compositional and functional stability to microbial communities [57-59], spatial pattern diversity can also contribute toward compositional and functional stability. Why do spatial jackpot events emerge, and what enables their propagation? The term jackpot event has typically been used in relation to genotypic events, where rare mutations can emerge that enable new genotypes to proliferate and persist [37,38,60]. In our case, spatial jackpot events emerge from a stochastic process that does not have a genetic basis, as we demonstrated via heritability tests and genome re-sequencing analyses in a previous study [56]. Spatial patterns can diversify due to local variations in the initial spatial positionings of individual cells, which results in two different patterns of spatial self-organization that emerge simultaneously [44,56]. The dominant pattern is "producer-first expansion", where the producer expands first and the consumer follows. In this scenario, the expansion edge is occupied by producer cells that rapidly proliferate due to their preferential access to nitrate (NO3−), whereas initially negligible nitrite (NO2−) concentrations result in the exclusion of consumer cells from the expansion edge. The minority pattern is "consumer-first expansion" (referred to here as spatial jackpot events). During the development of a spatial jackpot event, the producer pushes a few consumer cells forward within the expansion area (Fig. 1b) [44,56].
Our detailed modeling results show evidence for two important mechanisms that facilitate the nucleation of spatial jackpot events (Supplementary Fig. S6). First, the inoculated consumer cell needs to persist at the expansion edge via shoving by producer cells (Supplementary Fig. S6a). Furthermore, the results suggest that there is a stronger sensitivity to local conditions at pH 7.5, where the consumer benefits from a cluster of adjacent consumer cells supported by a background of producer cells in their vicinity that push the consumer cluster toward the expansion edge (Supplementary Fig. S6b, c). Once sufficient nitrite is available, the consumer cells that remain near the expansion edge gain a localized relative growth-rate advantage due to abundant nitrite (in comparison to the diminishing nitrate availability per consumer cell), which results in the persistence of the observed spatial jackpot event (Fig. 5). In contrast, at pH 6.5 the local growth-rate advantage of the consumer does not require a high number of adjacent consumer cells in order to nucleate a spatial jackpot event (Supplementary Fig. S6b, c). Thus, stochastic processes determine the initial spatial positionings of individuals, while deterministic processes then act on those individuals to generate a range of spatial patterns as a function of different relative growth rates and behaviors [33,36,61-63]. A deliberately simplified sketch of this shoving-based nucleation mechanism follows below.

Fig. 6 a The difference in the ratio of consumer-to-producer between two subsequent transitions (anoxic/oxic) has a large variance at earlier times and reaches zero (i.e., stability) after seven transitions at pH 7.5 (weak mutualistic interaction). In contrast, the variance reaches zero after 12 transitions at pH 6.5 (strong mutualistic interaction). b The difference in the intermixing indices between two subsequent transitions (anoxic/oxic) reaches zero after three transitions at pH 7.5 and after 14 transitions at pH 6.5. The solid black lines are the means at pH 6.5 while the dashed black lines are the means at pH 7.5.

Previous studies that investigated range expansion in microbial communities highlighted that increasing the strength of a positive interaction can slow the loss of diversity under constant redox conditions [45]. We found that strengthening the mutualistic interaction also increases the persistence of the consumer at the expansion edge in the face of environmental perturbations. The relative abundance of the two strains and their intermixing showed comparable outcomes (Fig. 2), with both being higher at pH 6.5 than at 7.5. How generalizable are our main conclusions? Fluctuating environmental conditions frequently occur in natural systems such as soils. Redox fluctuations following intermittent rainfall events, where anoxic conditions rapidly develop in saturated soils while oxic conditions prevail in unsaturated soils, expose soil microorganisms to fundamentally different environmental conditions that affect community composition and function [17]. Thus, our study may be of relevance for understanding the resistance and resilience of soil microbial communities to changes in redox.
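The shoving-based nucleation mechanism can be illustrated with a deliberately minimal toy model (this is not the study's agent-based model): a ring of cells at the colony edge in which each site is taken over by a daughter of a nearby cell, chosen in proportion to growth rate, and a consumer grows fast only when producers sit next to it (a crude stand-in for local nitrite supply). All parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# The colony edge is a ring of L cells: producer = 0, consumer = 1.
L, GENS = 500, 300
MU_P = 1.0          # producer growth rate (preferential access to nitrate)
MU_C_BASE = 0.3     # consumer growth rate without adjacent producers
MU_C_FED = 1.2      # consumer growth rate when producers are adjacent

front = (rng.random(L) < 0.1).astype(int)  # 10% consumers at inoculation

def rates(front):
    # Consumers are "fed" if at least one neighbour is a producer.
    prod_neigh = (np.roll(front, 1) == 0) | (np.roll(front, -1) == 0)
    return np.where(front == 0, MU_P,
                    np.where(prod_neigh, MU_C_FED, MU_C_BASE))

for _ in range(GENS):
    mu = rates(front)
    new = front.copy()
    for i in range(L):
        idx = [(i - 1) % L, i, (i + 1) % L]
        w = mu[idx] / mu[idx].sum()
        new[i] = front[rng.choice(idx, p=w)]  # offspring shoves into site i
    front = new

# Surviving consumer sectors at the edge ~ persistent "jackpot" sectors.
sectors = np.sum(np.diff(np.r_[front, front[0]]) == 1)
print(f"consumer fraction at edge: {front.mean():.2f}, sectors: {sectors}")
```

Raising MU_C_FED (a stronger benefit from adjacent producers, analogous to pH 6.5) lets isolated consumers persist, whereas lowering it makes persistence depend on starting inside a consumer cluster, qualitatively mirroring the asymmetry described above.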
More generally, the principle that we investigated here may be relevant for any type of environmental perturbation or fluctuation, provided that the two assumptions discussed above are satisfied (i.e., different environmental conditions promote the emergence of different patterns of spatial self-organization, and the patterns of spatial self-organization that emerge under one set of environmental conditions are detrimental under other sets of environmental conditions).

Fig. 7 Simulations with different initial environmental conditions. a Standard experimental design with initially anoxic conditions and nitrate (NO3−) added exogenously as the growth-limiting substrate. b When expansion was initiated under oxic conditions, more spatial jackpot events emerged due to the initial growth of consumer cells at the expansion edge. c When expansion was initiated with an exogenous supply of both nitrate and nitrite (NO2−), the interdependence between the consumer and producer was alleviated and more spatial jackpot events proliferated to the expansion edge.

How widespread are spatial jackpot events likely to be in nature? We argue that such spatial jackpot events may be typical features of self-organizing microbial communities. When any surface is colonized by microbial cells, individuals will not be distributed uniformly. Instead, colonized surfaces will contain local differences in the initial spatial positioning of individuals. These differences, in turn, can create spatial pattern diversity, where some of the patterns may provide new community-level properties such as resistance or resilience to environmental change. Thus, spatial jackpot events may be widespread and inevitable features of surface-associated microbial communities.

DATA AVAILABILITY

All data and codes required to reproduce the figures and conclusions are publicly available on the Eawag Research Data Institutional Collection (ERIC) repository at the following URL: https://data.eawag.ch/dataset/data-for-rare-andlocalized-events.
9,086
2022-01-25T00:00:00.000
[ "Environmental Science", "Biology" ]
The a-function for gauge theories

The a-function is a proposed quantity defined for quantum field theories which has a monotonic behaviour along renormalisation-group flows, being related to the beta-functions via a gradient-flow equation involving a positive-definite metric. We construct the a-function at four-loop order for a general gauge theory with fermions and scalars, using only one- and two-loop beta-functions; we are then able to provide a stringent consistency check on the general three-loop gauge beta-function. In the case of an N=1 supersymmetric gauge theory, we present a general condition on the chiral-field anomalous dimension which guarantees an exact all-orders expression for the a-function; and we verify this up to fifth order (corresponding to the three-loop anomalous dimension).

Introduction

It is natural to regard quantum field theories as points on a manifold with the couplings $\{g^I\}$ as co-ordinates, and with a natural flow determined by the $\beta$-functions $\beta^I(g)$. At fixed points the quantum field theory is scale-invariant and is expected to become a conformal field theory. It was suggested by Cardy [1] that there might be a four-dimensional generalisation of Zamolodchikov's c-theorem [2] in two dimensions, such that there is a function $a(g)$ which has monotonic behaviour under renormalisation-group (RG) flow (the strong a-theorem) or which is defined at fixed points such that $a_{UV} - a_{IR} > 0$ (the weak a-theorem). It soon became clear that the coefficient (which we shall denote $\frac{1}{4}A$) of the Gauss-Bonnet term in the trace of the energy-momentum tensor is the only natural candidate for the a-function. A proof of the weak a-theorem has been presented by Komargodski and Schwimmer [3] and further analysed and extended in Refs. [4,5]. In other work, a perturbative version of the strong a-theorem has been derived [6] from Wess-Zumino consistency conditions for the response of the theory, defined on curved spacetime and with x-dependent couplings $g^I(x)$, to a Weyl rescaling of the metric [7]. This approach has been extended to other dimensions in Refs. [8,9]. The essential result is that we can define a function $\tilde A$ by Eq. (1.1), where $A$ is defined above and $W_I$ is well-defined as an RG quantity on the theory extended as described above, such that $\tilde A$ satisfies the crucial Eq. (1.2). Here $G_{IJ} = G_{JI}$, $\rho_I$ and $Q_J$ may all be computed perturbatively within the theory extended to curved spacetime and x-dependent $g^I$; for weak couplings $G_{IJ}$ can be shown to be positive definite in four dimensions (in six dimensions, $G_{IJ}$ has recently been computed to be negative definite at leading order [10]). Eq. (1.2) implies Eq. (1.3), thus verifying the strong a-theorem so long as $G_{IJ}$ is positive. Crucially, Eq. (1.2) also imposes integrability conditions which constrain the form of the β-functions and are the focus of this paper. These conditions relate contributions to β-functions at different loop orders. We should mention here that for theories with a global symmetry, $\beta^I$ in these equations should be replaced by a $B^I$ which is defined, for instance, in Ref. [6]; however, it was shown in Refs. [11,12] that the two quantities only begin to differ at three loops (see also Ref. [13]). The (hermitian) gauge generators for the scalar and fermion fields are denoted respectively $t^\phi_A = -(t^\phi_A)^T$ and $t^\psi_A$, $A = 1 \ldots n_V$, where $n_V = \dim G$; they obey the usual commutation relations, and gauge invariance requires $Y_a t^\psi_A + (t^\psi_A)^T Y_a = (t^\phi_A)_{ab} Y_b$ and $(t^\phi_A)_{ae}\,\lambda_{ebcd}\,\varphi_a\varphi_b\varphi_c\varphi_d = 0$.
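The display equations (1.1)-(1.3) referred to above do not survive in this text; the following is a schematic reconstruction in the spirit of Refs. [2,6,15], and the precise index structure, the role of the quantities $\rho_I$ and $Q_J$, and all sign conventions are assumptions that should be taken from the original paper rather than from this sketch.

```latex
% (1.1) schematic: a modified a-function built from A and an RG quantity W_I
\tilde{A} = A + W_I\,\beta^I
% (1.2) schematic: gradient-flow relation with a symmetric metric G_{IJ} = G_{JI}
\partial_I \tilde{A} = \bigl(G_{IJ} + \partial_I W_J - \partial_J W_I\bigr)\,\beta^J
% (1.3): contracting (1.2) with \beta^I removes the antisymmetric part, so
\mu \frac{d\tilde{A}}{d\mu} = \beta^I \partial_I \tilde{A}
  = G_{IJ}\,\beta^I\beta^J \;\geq\; 0
  \quad \text{whenever } G_{IJ} \text{ is positive definite.}
```

The last line is exactly the strong a-theorem claim in the text: only the symmetric part of the coefficient of $\beta^J$ survives the contraction with $\beta^I\beta^J$, so positivity of the metric suffices for monotonicity.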
In order to simplify the form of our results, it is convenient to assemble the Yukawa couplings into a matrix; this corresponds to using the Majorana spinor $\Psi$ built from $\psi_i$ and $-C^{-1}\bar\psi^{iT}$. We should mention here that in our present calculations we have ignored potential parity-violating counterterms (i.e. those containing $\epsilon$-tensors). The analysis of Ref. [6] was recently extended [28] to the case of theories with chiral anomalies, including the possibility of parity-violating anomalies. It would be interesting to carry out the detailed computations necessary to exemplify the general conclusions of Ref. [28]. The one- and two-loop gauge β-functions are given in Eq. (2.5), with $T_A$ as defined in Eq. (2.4). We follow Ref. [15] in removing the factors of $1/16\pi^2$ which arise at each loop order by redefining the couplings. The one-loop Yukawa β-function is given by Eq. (2.8), and the one-loop scalar β-function by Eq. (2.9). The leading terms in the metric $G_{IJ}$ in Eq. (1.2) may be written as in Eq. (2.10) [6], where $\sigma$ is given (using dimensional regularisation, DREG) in Eq. (2.11) [6,20]. We emphasise here that $y$ and $\hat y$ are not independent; furthermore, the result of a trace is unchanged by interchanging $y$ and $\hat y$. The lowest-order contributions to $\tilde A$ are given implicitly in Ref. [20], in Eq. (2.12). To proceed to the next order, we shall need the two-loop Yukawa β-function in addition to the one-loop scalar β-function in Eq. (2.9). The two-loop Yukawa β-function is given in general by Refs. [26,27] in the form of Eq. (2.13), containing terms such as $\mathrm{tr}[c_{24}\, y_a \hat y_b y_c \hat y_c + c_{25}\, y_a \hat y_c y_b \hat y_c + c_{26}\, g^2 \hat C^\psi y_a \hat y_b]$. The contributions $G^y_\alpha$ are depicted in Table 1; $\hat G^y_\alpha$ is the transpose of $G^y_\alpha$. A solid or open box represents $g^2 C^\psi$ or $g^2 C^\phi$ respectively, and a box with the letter "A" represents the gauge generator $g T_A$. Note that for each $G^y_\alpha$ there is an alternation between "hatted" and "unhatted" $y$ matrices, as can be seen in Eq. (2.13) for $G^y_\alpha$, $\alpha = 20, \ldots, 28$. To give a couple of examples, $G^y_4$ represents $(G^y_4)_a = \lambda_{abcd}\, y_b \hat y_c y_d$ (2.14), and $G^y_{19}$ corresponds to the following equation. We present the results for the coefficients, evaluated using standard dimensional regularisation (DREG) [26,27], in Eq. (2.16). There are 33 coefficients altogether (counting $c_{20}$ and $c_{28}$ as three each). We do, however, have the freedom to redefine the couplings, corresponding to a change of renormalisation scheme; at this order we may consider $\delta y_a = \mu_1\, y_b \hat y_a y_b + \mu_2\,(y_b \hat y_b y_a + y_a \hat y_b y_b) + \mu_3\, \mathrm{tr}[y_a \hat y_b]\, y_b + \ldots$ (Eq. (2.17)). This results in a change in the β-function, Eq. (2.18). We observe that the redefinitions corresponding to $\mu_{1-4}$ are not all independent; for instance we may remove $\mu_4$ by the redefinition of Eq. (2.19). This is a general consequence of the form of the redefinition given by Eq. (2.18), which implies that a redefinition $\delta y_a = \beta^{(1)}_{y_a}$, $\delta g = \beta^{(1)}_g$ (2.21) has no effect on $\beta^{(2)}_{y_a}$; however, $\mu_5$ yields an independent redefinition, due to the fact that there happens to be no corresponding $C^\phi_{ab}\, y_b$ term in $\beta_{y_a}$. It then follows that $\mu_{1-5}$ and $\nu_{1-3}$ yield only 7 independent redefinitions; we therefore have $33 - 7 = 26$ independent coefficients in the two-loop β-function. The change Eq. (2.17) corresponds to taking $\delta\sigma = 4\nu_1 C_G + \nu_2 R_\psi + \nu_3 R_\phi$ in Eq. (2.10). Applying Eq. (1.2), we require $\tilde A^{(4)}$ to satisfy $d_y \tilde A^{(4)} = dy \cdot T^{(3)}_{yy} \cdot \beta^{(1)}$ (Eq. (2.23)). The contributions to $dy \cdot T_{yy} \cdot d'y$ at this order are depicted in Table 2, where a diamond represents $d'y$ and a cross $dy$; $G^T_1$ provides an example of such a contraction. $T^{(3)}_{yy}$ is symmetric up to the order at which we are working. There are no "off-diagonal" fermion-scalar contributions at this order.
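The independence counting above relies on the standard transformation rule for β-functions under a change of scheme; written schematically (this is textbook RG covariance, not the paper's Eq. (2.18) verbatim):

```latex
g^I \to g'^I = g^I + \delta g^I(g)
\;\Longrightarrow\;
\delta\beta^I = \beta^J \partial_J\,\delta g^I \;-\; \delta g^J \partial_J\,\beta^I
\;+\; O(\delta g^2).
```

Inserting a one-loop shift $\delta g^I = \beta^{(1)I}$ makes the two terms cancel at the next order, which is precisely why the choice $\delta y_a = \beta^{(1)}_{y_a}$, $\delta g = \beta^{(1)}_g$ of Eq. (2.21) has no effect on $\beta^{(2)}_{y_a}$, and why only 7 of the redefinition parameters act independently.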
We parameterise $\tilde A^{(4)}$ as in Eq. (2.24), where the different contributions $G^A_\alpha$ are depicted in Table 3, with a similar notation to Table 2. We have included $G^A_{28}$ as a reflection of the general freedom to redefine $\tilde A$ together with a related redefinition of $T_{IJ}$; see Ref. [15] for further details.

Table 3: Contributions to $\tilde A^{(4)}$ in the non-supersymmetric case.

The purely g-dependent contributions to $\tilde A^{(4)}$ of course cannot be determined from Eq. (2.23). Eq. (2.23) entails a system of equations for the $A_\alpha$, in which the $c_\alpha$ are given in Eq. (2.16), together with conditions on the β-function coefficients, Eq. (2.30). The conditions on $T_{1-6}$ in Eq. (2.29) were already derived in Ref. [15]. Reassuringly, the conditions of Eq. (2.30) are satisfied by the coefficients in Eq. (2.16), and also by the redefinitions in Eq. (2.19). These six constraints in principle leave only 19 of the 25 independent coefficients in the two-loop β-function to be determined by perturbative computation. It turns out that Eq. (2.23) is sufficient to determine the Yukawa- or λ-dependent part of $\tilde A^{(4)}$ up to three free parameters; the results for the case of dimensional regularisation are given in Eq. (2.31), where $\beta_0$ is given in Eq. (2.5). Since $A_6$ only appears in Eq. (2.28) in the combination $4A_6 + 2A_{28}$, we have set $A_6 = 0$ in line with Ref. [15]. We note how the $A_\alpha$ change under the redefinitions in Eq. (2.17); moreover, the effect of these redefinitions on the metric coefficients in Eq. (2.23) (as parametrised in Eq. (2.24)) may easily be computed using Eq. (2.10). Using Eq. (2.19), these results are easily seen to agree with Eq. (2.29). It is remarkable that no knowledge of the "metric" coefficients $T_\alpha$ is required to determine the $A_\alpha$ in this fashion; of course the $t_i$ in Eq. (2.31), which define the "off-diagonal" fermion-gauge metric in Eq. (2.23), could be determined by a perturbative calculation if required, as was accomplished for the fermion-scalar case in Ref. [15]. The results in Eq. (2.31) will be used in Sect. 3 in a check of the three-loop $\beta_g$. In Ref. [15] the extension to three loops was accomplished by first inferring the three-loop Yukawa β-function for a chiral fermion-scalar theory, using the three-loop results derived in Ref. [30] for the standard model combined with the results for the supersymmetric Wess-Zumino model. Such an approach will not work in the gauged case, unfortunately; the results of Ref. [30] apply only to the SU(3) colour gauge group, which is not sufficient to determine how the three-loop Yukawa β-function depends on a general gauge coupling.

The three-loop gauge β-function

The three-loop gauge β-function was computed in Ref. [25] for a general gauge theory coupled to fermions and scalars. In this section we show that our result for $\tilde A^{(4)}$ is compatible with this result via Eq. (1.2). In fact, our result for $\tilde A^{(4)}$ determines the 16 terms in $\beta^{(3)}_g$ with Yukawa couplings up to four (see later) unknown parameters. It is rather striking that the two-loop calculation of $\beta_y$ (and one-loop $\beta_\lambda$) has thereby provided so much information on a three-loop RG quantity. This is an example of the "3-2-1" phenomenon noted in Refs. [23,31]: namely, that the gauge-gauge, fermion-fermion and scalar-scalar contributions to the metric $G_{IJ}$ start at successive loop orders. In our notation, $\beta^{(3)}_g$ may be expanded over the structures $G^A_\alpha$ implicitly defined in Table 3; the purely g-dependent terms are not determined in this analysis. It is then easy to show, using Eqs. (2.26) and (2.31), that we can cast the result in the form of Eq. (3.2), where $\beta^{(1)}_g$, $\beta^{(2)}_g$ are given in Eq. (2.5).
We notice that $T^{(2)}_{gg}$ agrees with the result for $\sigma$ in Eq. (2.11). $T^{(3)}_{gg}$ takes an analogous form. Unfortunately we have no means of disentangling the separate purely g-dependent contributions in $\tilde A^{(4)}$ and in $T_{gg}\beta_g$ without a three-loop calculation; but all the Yukawa- or λ-dependent contributions match exactly. Were a further relation among these coefficients to hold, we would have $T_{IJ}$ symmetric at this order; but as demonstrated in Ref. [15], at three loops $T_{IJ}$ is not symmetric even for a pure fermion-scalar theory in a general renormalisation scheme. Had we not known $\beta_g$, it would have been determined by Eq. (3.2) up to four parameters, consisting of the two coefficients in $T^{(2)}_{gy}$ and two further metric coefficients.

The supersymmetric case

Here the analysis is extended to a general N = 1 supersymmetric gauge theory, which may in principle be obtained from the general non-supersymmetric theory discussed in Sect. 2 by an appropriate choice of fields and couplings. Such a theory can of course be rewritten in terms of $n_C$ chiral and corresponding conjugate anti-chiral superfields, and indeed perturbative computations are enormously simplified through the use of this formalism; moreover, in the light of the non-renormalisation theorem and the NSVZ formula [32,33] for the exact gauge β-function, the renormalisation of the theory is essentially entirely determined by the chiral-superfield anomalous dimension γ (at least in a suitable renormalisation scheme). In this section we shall therefore start anew using results derived using superfield methods. Nevertheless, in Sect. 5 we show that (at least up to two loops) the results obtained using the two approaches match, as indeed they must. The crucial new feature in the supersymmetric context is the existence of a proposed exact formula for the a-function [17-19]. This exact form was verified up to two loops in Ref. [20] for a general supersymmetric gauge theory, and up to three loops [15] in the case of the Wess-Zumino model. Moreover, in Ref. [15] a sufficient condition on γ to guarantee the validity of this exact result was found and shown to be satisfied up to three loops; related considerations appear in Refs. [18,19], as discussed later. In this section we shall generalise this condition to the gauged case and check that it is satisfied up to three loops, using the results of Ref. [22]. The couplings $g^I$ are now given by $g^I = \{g, Y^{ijk}, \bar Y_{ijk}\}$ with $\bar Y_{ijk} = (Y^{ijk})^*$. The supersymmetric Yukawa β-functions are expressible in terms of the anomalous dimension matrix $\gamma_i{}^j$, where for arbitrary $\omega_i{}^j$ we define a corresponding insertion into the Yukawa coupling; we also introduce a scalar product for Yukawa couplings, and it is further useful to define some associated shorthand. The gauge β-function is assumed to have the form of Eq. (4.5), where, with $R_A$ the gauge group generators, $Q$ is defined in Eq. (4.6) and $n_V$ is the dimension of the gauge group. Gauge invariance imposes a consistency condition on these quantities. Under a change $g \to g'(g) = g + O(g^3)$, the function $f$ in Eq. (4.5) transforms correspondingly, assuming $g'$ is independent of $Y, \bar Y$; for an infinitesimal change, $\delta f = f\, \partial_g \delta g - \delta g\, \partial_g f$ and $\delta\gamma = -\delta g\, \partial_g \gamma$. The NSVZ form for the β-function is obtained if Eq. (4.9) holds. The resulting expression for $\beta_g$ originally appeared (for the special case of no chiral superfields) in Ref. [29], and was subsequently generalised, using instanton calculus, in Ref. [32] (see also Ref. [33]). We note here that this result (called the NSVZ form of $\beta_g$) is only valid in a specific renormalisation scheme, which we correspondingly term the NSVZ scheme. The exact expression generalises one- and two-loop results obtained in Refs. [34-36].
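For orientation, the NSVZ expression referred to here is commonly written as follows (e.g. Refs. [32,33]); the normalisation, the placement of the $16\pi^2$ factors and the identification $Q = T(R) - 3C(G)$ are the usual textbook conventions and may differ from the paper's Eqs. (4.5)-(4.9):

```latex
\beta_g^{\mathrm{NSVZ}}
  = \frac{g^3}{16\pi^2}\,
    \frac{Q \;-\; 2\,r^{-1}\,\mathrm{tr}\!\left[\gamma\,C(R)\right]}
         {1 \;-\; 2\,C(G)\,g^2\,(16\pi^2)^{-1}},
\qquad Q = T(R) - 3\,C(G), \quad r = n_V .
```

Expanding the denominator reproduces the one- and two-loop coefficients, consistent with the statement below that $\hat\beta_g^{(1)} = Q$ in the rescaled conventions, and makes manifest how lower-order anomalous dimensions determine the higher-loop gauge β-function.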
These results were computed using the dimensional reduction (DRED) scheme; in any case, the DRED and NSVZ schemes only part company at three loops [39]. The one- and two-loop results for γ are given by [37,38] $\gamma^{(1)} = P$, together with the two-loop expression, where $P$ and $S_1$ are defined in Eq. (4.11). We use here the notation and conventions of Ref. [22]. In the supersymmetric theory, Eq. (1.2) is assumed to now take the form of Eq. (4.12) (with a similar equation for $d_{\bar Y}\tilde A$). We have written the RHS in terms of $\hat\beta_g$, effectively absorbing the factor $f(g)$ in Eq. (4.5) into $T_{Yg}$ and $T_{gg}$. We omit potential $\beta_Y$ terms in the first of Eqs. (4.12) since they are not necessary to the order we shall consider. For N = 1 supersymmetric theories there is, at critical points with vanishing β-functions, an exact expression for $a$ [17] in terms of the anomalous dimension matrix γ, or alternatively the R-charge $R = \frac{2}{3}(1 + \gamma)$. Introducing terms linear in β-functions, there is a corresponding expression which is valid away from critical points, and this can then be shown to satisfy many of the properties associated with the a-theorem [18,19]. For the theory considered here, with $n_C$ chiral scalar multiplets, these results take the form of Eq. (4.13), where $\hat\beta_g$ is given by Eq. (4.5) and we require Eq. (4.14). For the remainder of this section we omit for simplicity the term involving $H$ in Eq. (4.13), but return to it in Sect. 5. In Refs. [18] and [19], $\Lambda$, $\lambda$ are Lagrange multipliers enforcing constraints on the R-charges. At lowest order, the result for $\Lambda$ and also the metric $G$ obtained in Ref. [18] are equivalent, up to matters of definition and normalisation, to those obtained here. The general form for $\tilde A$ given by Eq. (4.13) was verified up to two-loop order (for the anomalous dimension) in Ref. [20]. $\Lambda$ may be constrained by imposing Eq. (4.12). If $\Lambda$, $\lambda$ are required to obey Eq. (4.17), where, making the indices explicit, $\Theta \cdot d\bar Y \to \Theta_i{}^{j,klm}\, d\bar Y_{klm}$ and $\theta \to \theta_i{}^j$, then Eq. (4.13) satisfies Eq. (4.12) if we take the metric quantities as in Eq. (4.18), with $T_{gY} = 0$. However, Eq. (4.14) may be used to write Eq. (4.18) in equivalent forms with non-zero $T_{gY}$. A result related to Eq. (4.17), with effectively $\Theta, \theta = 0$, is contained in Ref. [18] and also discussed in Ref. [19]. For supersymmetric theories, satisfying Eq. (4.17) is consequently essentially equivalent to requiring Eq. (4.12), although terms involving $\Theta$ are necessary at higher orders. However, the work of Refs. [18,19] implies that, at least in the pure gauge case, there may be renormalisation schemes in which $\theta$ may be set to zero. It is striking that only minor modifications to the condition proposed in Ref. [15] are required for the extension to the gauged case. The condition (4.17) does not fully determine $\lambda$, $\theta$, since we retain a freedom parametrised by an arbitrary $\mu$; there is a similar freedom in $\Lambda$, $\Theta$. At lowest order $\Theta$, $\theta$ do not contribute, so that (4.17) simplifies and we may simply take the lowest-order solution from Eqs. (4.10), (4.11). At the next order we require Eq. (4.24), since $\hat\beta_g^{(1)} = Q$, with $Q$ as defined in Eq. (4.6). We may parameterise $\Lambda^{(2)}$ and $\Theta^{(1)}$ in terms of the quantities of Eq. (4.11). We then find that Eq. (4.24) imposes relations among these parameters; hence $T_{gg}^{(2)} = 4\lambda\, n_V\, g^3 Q$ (4.30). As a consequence of (4.20), $\lambda$ is arbitrary. The computation of $T_{gg}^{(2)}$ in Ref. [20] (specialising the DRED version of Eq. (2.11) to the supersymmetric case, and adjusting for the differing definition of the "gg" metric) fixes its value, Eq. (4.31), in this scheme. At third order, in order to satisfy Eq. (4.17), we write $\Lambda^{(3)}$ in terms of the contributions depicted in Table 4. Here a "blob" represents an insertion of the one-loop anomalous dimension, and the 3-point vertices alternate between $Y$ and $\bar Y$.
As an example, $G^\Lambda_6$ represents a particular contribution, and Eq. (4.4) then implies a corresponding contribution to $(\bar Y \Lambda^{(3)})$. Here $P$, $S_1$ are given in Eq. (4.11) and $S_2$ is defined in Eq. (4.36). Similarly we write $\Theta^{(2)}$ in terms of the contributions $G^\Theta_\alpha$ shown diagrammatically in Table 5. A term in $S_1$ is apparently possible in $\theta^{(2)}$, but is excluded since there is no contribution to $\gamma^{(3)}$ involving $g^2 Q S_1$. As a consequence of (4.20), the resulting equations depend only on $2\lambda_3 + \theta_3$ and $2\lambda_4 + \theta_4$. We expand the three-loop anomalous dimension in a basis of tensor structures, with $Q$, $P$, $S_{1,2}$ as defined in Eqs. (4.6), (4.11), (4.36); the remainder of the distinct tensor contributions are depicted in diagrammatic form in Table 6. The basis for $\gamma^{(3)}$ is restricted by the absence of one-particle-reducible contributions such as $P^3$, $P^2 C_R$, $S_{1,2} P$, $P S_{1,2}$. Using Eqs. (4.10), (4.25) in Eq. (4.32) leads to a large number of consistency equations which constrain $\gamma^{(3)}$. If $g = 0$ they reduce to a smaller set, with the requirements obtained in Ref. [15]. The other special case is $Y, \bar Y = 0$; in this case, applying Eq. (4.32) with $\Lambda, \Theta \to 0$, it is necessary to require the conditions of Eq. (4.41), as well as further relations. The relations in Eq. (4.41) were obtained in Refs. [18,19]. We are therefore obliged for consistency to use the result for the anomalous dimension corresponding to this NSVZ scheme. The required transformation was presented in Ref. [39] and its effect on $\gamma^{(3)}$ given in Ref. [40]; in fact it is only $\gamma_{17}$ and $\gamma_{22}$ which are affected. In the Wess-Zumino case considered in Ref. [15], the existence of an a-function satisfying Eq. (1.2) implied that $\gamma_1 - 2\gamma_2 - \gamma_3$ was an invariant (in a sense described in Ref. [15]) but did not impose a specific value, thus showing that Eq. (4.17) is sufficient but not necessary. We might expect similar remarks to apply to the other conditions in Eq. (1.2). It is all the more striking that these conditions are in fact satisfied by the anomalous dimension as computed. We may count the independent parameters in the anomalous dimension as we did in Section 2 for the Yukawa β-function, using the essential equations and also assuming $\delta\beta_g$ is given in terms of $\delta\gamma$ in accordance with Eq. (4.5). In the $g = 0$ case, the only coefficient in $\gamma^{(3)}$ with a κ-dependence, $\gamma_4$, corresponds to a non-planar graph. In the general case there is no such obvious association between non-planar Feynman graphs and coefficients in $\gamma^{(3)}$ with κ-dependence (evaluated using DRED). However, an intriguing observation is that a redefinition may be chosen in terms of the contributions corresponding to the Feynman diagrams shown in Table 7. The implication is that there is a scheme in which the κ-dependent terms in $\gamma^{(3)}$ are generated solely by non-planar diagrams.

Reduction of non-supersymmetric results to supersymmetric case

In this section we check that the a-function obtained using the methods of Section 2 for a general theory is compatible, upon specialisation to the supersymmetric case, with the a-function presented in Section 4 (at least up to two loops). The reduction of the non-supersymmetric theory of Section 2 to the supersymmetric case (with $n_\psi = n_V + n_C$, $n_\varphi = 2 n_C$) may be accomplished by writing the fermions in terms of the chiral fermions and the gaugino λ, with $y_a \varphi_a = y^i \phi_i + \bar y_i \bar\phi^i$. $\hat y^i$ and $\hat{\bar y}_i$ may be obtained from $\bar y_i$ and $y^i$ by interchanging the upper-left and lower-right 2×2 blocks of the 4×4 matrices. We also have the corresponding assembled quantities and, consequently, the relations following from Eq. (2.6).
The scalar potential now takes its supersymmetric form. In making the reduction from the general theory to the supersymmetric case, we must start from two-loop β-functions corresponding to DRED, since the RG functions used in Section 3 were evaluated using this scheme; as mentioned earlier, the DRED and NSVZ schemes coincide up to the two-loop order considered in this section. We use the results given in Ref. [42], which may be obtained from the DREG results by a coupling redefinition as in Eq. (2.17). These changes are a consequence of making the transformations of Eq. (5.6), and act upon $\tilde A^{(3)}$ and $\tilde A^{(2)}$ respectively in Eq. (2.12). Presumably the transformation in Eq. (5.9) represents part of the two-loop transformation from DREG to DRED (namely, the Yukawa-dependent contribution to the transformation of g). To the best of our knowledge this has not been computed in full, though results have been given for the pure gauge case. These coefficients correspond to a three-loop calculation (see Eq. (2.23)) and, in view of Eq. (4.47), depend on the value of $\gamma_{17}$, which has a different value in the NSVZ scheme than in DRED. It is beyond the scope of this article to consider how Eq. (5.14) would be modified within DRED, or indeed within DREG. Since our whole approach is predicated on the NSVZ scheme, it would probably be naive to assume that the DRED form of Eq. (5.14) would be obtained simply by using the DRED result for $\gamma_{17}$. Eq. (5.11) extends the result of Eq. (7.30) in Ref. [15] (with $a = 3\alpha - \frac{1}{12}$) to the gauge case; once again, modulo pure gauge terms which are not captured by the methods used in Section 2. We see again the ambiguity in the form of $\tilde A$ expressed in general by Eq. (2.27). Of course this check was guaranteed to work, but nevertheless, given the indirect manner in which we obtained $\tilde A$ and the possibility of subtleties regarding scheme dependence, it is satisfying to "close the loop" in this fashion. Finally, we remark that although the form for $\tilde A$ presented in Eq. (5.11) is appealingly simple (arguably even more so than Eq. (4.13)), the obvious extension to higher loops does not appear to be viable.

Conclusions

In this article we have extended the results of Ref. [15] to the case of general gauge theories. In the non-supersymmetric case we have constructed the terms in the four-loop a-function containing Yukawa or scalar contributions, using the two-loop Yukawa β-function and one-loop scalar β-function. Our main result here is Eq. (2.26) together with Eq. (2.31). This enabled a comparison with similar terms in the three-loop gauge β-function. In general, as a consequence of the properties of the coupling-constant metric, one can obtain information on the (n+1)-loop gauge β-function from the n- (and lower-) loop Yukawa β-function and the (n−1)- (and lower-) loop scalar β-function. This is reminiscent of the way in which the (n+1)-loop gauge β-function is determined by the lower-order anomalous dimensions in a supersymmetric theory, via the NSVZ formula. In the supersymmetric case we have given a general sufficient condition for the exact a-function of Refs. [17-19], given in Eq. (4.13), to be valid, and have shown that it is satisfied by the three-loop anomalous dimension. This condition is displayed in Eq. (4.17) and is our main result for the supersymmetric case. One feature of interest is that Eq. (4.17) imposes extra conditions on the anomalous dimension beyond the mere requirements of integrability from Eq. (1.2),
but these extra conditions are nevertheless satisfied by the explicit results as computed. Indeed, we remark here (without giving further details, since it is beyond our remit in this article on the gauged case) that we have observed similar features in the Wess-Zumino model at four loops, using the results of Ref. [41]. These properties certainly hint that there might be some underlying reason why Eq. (4.17) must be satisfied; it would be interesting to explore this further. If this were indeed the case, one could imagine exploiting Eq. (4.17) to expedite higher-order calculations of the anomalous dimension, such as the full gauged case at four loops, possibly combined with additional information such as the necessary vanishing of γ in the N = 2 case. Unfortunately, a preliminary check indicates that these constraints are far from sufficient to determine γ completely, even at three loops; a considerable quantity of perturbative calculation would therefore still be unavoidable. Finally, in Ref. [15] we explored in some detail the freedom to redefine the various quantities we have considered, and it would be interesting to extend those discussions to the current gauged case. In particular, it would be useful to extend Eq. (4.17), which in its current form is predicated upon the NSVZ renormalisation scheme, to a form valid for any scheme.
6,588.4
2014-11-05T00:00:00.000
[ "Physics" ]
Impact of Samurdhi Program on Poverty Alleviation: An Empirical Investigation of Samurdhi Beneficiaries in Kopay DS Division in Jaffna District

The aim of this study is to investigate the impact of the Samurdhi program on poverty alleviation in Kopay DS Division. Two hundred questionnaires were issued to Samurdhi beneficiaries of Kopay DS division in Jaffna district, Sri Lanka, in the GN divisions of Kopay North (J/262), Irupalai South (J/257), Urelu (J/267) and Urumpirai South (J/265). Of these, only 177 questionnaires could be collected; hence, 177 Samurdhi beneficiary families were incorporated as the sample. Correlation analysis and multiple linear regression analysis were used to analyze the data and examine the hypotheses using SPSS. The adjusted R² of 0.250 for the model implies that approximately 25% of the total variance in poverty alleviation can be explained by the dimensions of the Samurdhi program as the independent variable in this model; the remaining 75% of the variability is not explained by the model. The findings revealed that there is a significant impact of the Samurdhi program on poverty alleviation: microcredit has a positive and significant impact on poverty alleviation, welfare has a positive and significant impact on poverty alleviation, while livelihood activity has an insignificant impact. Based on these findings it can be concluded that the microcredit and welfare activities work effectively, while the livelihood activity needs improvement.

Introduction

Poverty is a complex and multidimensional social phenomenon. It is widespread and affects a broad worldwide population, from children to the elderly, not excluding ethnic minorities. Poverty has been one of the biggest and most challenging obstacles to human development, not only for under-developed or developing countries but also for wealthier, developed economies. Hence, fighting poverty has become a global theme. According to Kesavarajah (2011), poverty is the lack of basic human needs, such as clean water, nutrition, health care, education, clothing and shelter, due to the inability to afford them; poverty is therefore a major threat to the world. The year 2017 was declared the year of poverty alleviation in Sri Lanka, through the promotion of inclusive growth in keeping with the sustainable development goals of the United Nations. The Department of Samurdhi Development launched a people-empowerment program last year targeting the empowerment of 125,000 families, selecting nine families from each Grama Niladhari division, to achieve the target of no poverty by 2030. Estimates reveal that around 6 percent of the population in the country still live below the poverty line, earning less than one US$ a day. Statistics also reveal that nearly half of the world's population lives on less than $2.50 a day, while over 1.3 billion live in extreme poverty on less than $1.25 a day. Microfinance is one of the most widely accepted instruments for poverty alleviation throughout the world, and it has been used in Sri Lanka for over several decades [Ganga et al., 2005].
The Micro Finance Institutions (MFIs) empower the poor by providing financial and non-financial services to enhance their living standards, offering facilities for poverty alleviation, health and nutrition, education and self-employment opportunities, and helping them to obtain capital and independent income and to contribute economically to their families and society [Yogendrarajah, 2014]. In her study she finds that microfinance provides financial and non-financial services, such as small loans, savings, micro-leasing, micro-insurance and money transfer, to assist very poor people in their self-income-generating activities. Since independence, successive Sri Lankan governments and non-governmental organizations have launched several microfinance programs for poverty alleviation and income generation, including the establishment of Thrift and Credit Cooperative Societies, the Janasaviya Program, SEEDS, the Agro micro credit service, the National Development Fund and, most recently, the Samurdhi Program [Kumari, 2014]. As a developing country, Sri Lanka has a long history of social programs and food subsidies in particular. The major one of these is the Samurdhi program, which was introduced in 1995. Its main goal was to reduce poverty in Sri Lanka through development based on public participation. However, some researchers argue that Samurdhi, as a social welfare program, suffers from inefficiency, mis-targeting, and lack of transparency [Damayanthi, 2014; Kesavarajah, 2011; Thibbotuwawa et al., 2012]. Ismail et al. (2003) noted that, based on the program design, the key components of Samurdhi include compulsory and voluntary savings, human resource development (productivity development training, training in accounting functions, training of executive committees and material resource development), the establishment of Samurdhi Bank societies (responsible for the provision of credit), a community development program, labor-intensive peoples' projects, small industries development and social development programs. Furthermore, while much research has been done in the wide area of microfinance worldwide as well as in Sri Lanka, only a few studies have been conducted in Jaffna District, and none has focused on the Samurdhi program, particularly in the Kopay Division. Hence, this study attempts to investigate how, as a social assistance program, the Samurdhi program impacts the poverty level, focusing mainly on the Kopay DS division in Jaffna District. Furthermore, many researchers have accepted that microfinance is an important tool to alleviate poverty and enhance the living standard of poor people in developing countries [Addae-Korankye, 2012; Morduch & Haley, 2002]. As a developing country, Sri Lanka has a long history of microfinance institutions and their services, particularly to the poor, and among the reasons that could have contributed to the considerable achievement in poverty reduction, the Samurdhi Program may be one of the major ones. However, there is limited knowledge on poverty alleviation through the microfinance programs of Samurdhi in Kopay DS Division. As at December 2018, 39.47% of the total population of Kopay DS division fell into the category of income under Rs. 5000 per month [SHB, 2019]. Hence, there is a need to assess poverty alleviation there.
There is disagreement over whether the Samurdhi program is an effective vehicle to reduce poverty, and there are a number of criticisms of the Samurdhi program and its implementation. An evaluation of the performance of the Samurdhi Banks is therefore timely. Since this is the government's major program for poverty alleviation in Sri Lanka, there is a need to evaluate the program and its implications from time to time. Here, as per Damayanthi's (2014) arguments, mis-targeting, lack of transparency, accountability, efficiency and effectiveness, equity and social justice, as well as an informed citizenry, are some of the serious governance issues which have an impact on the Samurdhi program. In this research context, poverty alleviation is used as the dependent variable and the Samurdhi program as the independent variable, and the problems and issues mentioned above are taken as research gaps. Hence, this study seeks to fill those research gaps and formulates the following research question as its research problem: "How does the Samurdhi program impact poverty alleviation, particularly in Kopay DS Division in Jaffna district, Sri Lanka?" To answer this research question, the following objective is set: to investigate the impact of the Samurdhi program on poverty alleviation in Kopay DS Division. Further, this study examines the impact of individual Samurdhi activities, namely microcredit, livelihood and welfare, on poverty alleviation. Reflecting the importance of this issue, there is substantial previous literature on poverty alleviation and the Samurdhi program. Based on the empirical evidence, the study develops the integration between the basic concepts of poverty and the Samurdhi program. Gunatilaka et al. (1997) state that the word "Samurdhi" is derived from a local term meaning prosperity, and that the program comprises a short- and long-term strategy. The short-term strategy involves poverty-cushioning components, such as income support, social insurance and social development programs. The long-term strategy involves poverty alleviation through social mobilization, empowerment and integrated rural development. The program claims almost 1 percent of gross domestic product (GDP), or roughly half of all welfare expenditures excluding expenditures on education and health, and is the largest welfare program presently operating in the country [Glinskaya, 2000]. Many scholars have documented the various components of Samurdhi in their studies. As explained by Glinskaya (2000), the Samurdhi program has three major components. The first is the provision of a consumption grant transfer (food stamp) to eligible households; this component claims 80 percent of the total Samurdhi budget. The second component is a savings and credit program operated through so-called Samurdhi banks, with loans meant for entrepreneurial and business development. The third component is the rehabilitation and development of community infrastructure through workfare and social (or human) development programs.
According to CPI (2017), the key components of the Samurdhi program include the provision of a food stamp to eligible households, accounting for approximately 80% of the total Samurdhi budget; a savings and credit program operated through the "Samurdhi banks", with loans destined for entrepreneurial and business development; and rehabilitation and development programs: productivity development training, training in accounting functions, training of executive committees, and material resource development. The Samurdhi program thus has three major components: the consumption grant transfer (food stamp), the savings and credit program, and the rehabilitation and development of community infrastructure through workfare and social development programs [Hair et al., 1998]. Generally, there is no exact definition of poverty, as it is defined in different manners. Simply, it can be defined as the inability of people to attain a minimum standard of living. Those people who are unable to satisfy some of the basic needs, such as food, shelter, clothes, sanitation and clean water, are called poor. One billion people live on less than $1 a day, the threshold defined by the international community as constituting extreme poverty, below which survival is questionable [Ahmed et al., 2007]. The World Bank's mission is to work for a world free of poverty. Punjabi (2010) states that, more than subsidies, the poor need credit; lack of formal employment and poverty make this stratum of society non-bankable, as they do not have any credit history or documents of employment, which forces them to borrow money from moneylenders and landlords at exorbitant rates of interest. The poorest people are vulnerable people who live without health and nutrition, have no access to education, and whose per capita income per day is below 1 US$ [Rathirani & Semasinghe, 2015]. Dimensions of income, household assets and shelter, quality of employment, empowerment, dignity, physical safety, and psychological and subjective well-being have been used as multidimensional poverty indices. Rizphy & Jayasinghe-Mudalige (2010) examined the impact of the Samurdhi microfinance program on poverty alleviation of farmers in Ampara District. They used three indices, namely women empowerment, livelihood development and income generation, and the sum of the average values was taken as the value of the poverty alleviation index, since poverty alleviation cannot be measured directly. Based on the theoretical frame, the conceptual model given in Figure 1 has been developed to represent the relationship between Samurdhi and poverty alleviation. The Samurdhi program consists of microcredit, livelihood activity and welfare activity, whereas poverty alleviation is measured by income level, health and nutrition, housing condition and asset accumulation. The following model is specified to investigate the impact of the Samurdhi program on poverty alleviation, based on the variables used in the study:

PA = β0 + β1 MC + β2 WA + β3 LA + ε,

where PA denotes poverty alleviation, β0, β1, β2 and β3 are regression coefficients, MC is microcredit, WA is welfare activity, LA is livelihood activity and ε is the error term.

Previous Studies and Hypothesis development

Nowadays there are ongoing debates about the effectiveness of the Samurdhi program. The effectiveness of the Samurdhi program has been the subject of substantial national debate during the past decade, and much of this discussion has focused on the effectiveness of its targeting [Gunatilaka, 2010].
Rizphy & Jayasinghe-Mudalige (2010) investigated the impact of the Samurdhi microfinance program on poverty alleviation of farmers in Ampara District and identified the constraints associated with Samurdhi microcredit to the poor. In their study, a questionnaire-based survey was used to collect data from 60 farmers in the Addalaichanai Divisional Secretariat division. Their findings, based on multiple regression analysis, revealed that poverty alleviation is significantly affected by the Samurdhi microfinance program. In addition, they suggested that inspections by Samurdhi development officers should be arranged by the Samurdhi authority, and that the efficient use of microcredit should be increased to achieve better improvement through the Samurdhi microfinance program. Gunatilaka & Salih (2017) find that Samurdhi's group savings and intra-group credit component and the Samurdhi bank program function as important sources of emergency credit for beneficiaries. The program also works better in rural areas than in urban areas; it is heavily reliant on the income transfer component, and it faces constraints such as infrastructure bottlenecks and imperfections in the market for technology. Sanjeewanie et al. (2012) carried out a study applying multidimensional poverty data to the policy need to improve the effectiveness of the national social protection program, Samurdhi, in Sri Lanka. For the purpose of that study, data from a pilot survey in the Badulla District were used to compare Samurdhi households with non-Samurdhi households in relation to deprivation in multiple dimensions. They also argued that any program aiming to lift people out of poverty needs to be based on a good understanding of the nature of poverty among the target group. The findings of the study were that Samurdhi households are deprived in the dimensions of quality of employment, dignity, and psychological and subjective well-being, which has practical implications for the design and delivery of Samurdhi. According to Thibbotuwawa et al. (2012), Samurdhi generates a significant impact on household welfare in terms of income, consumption and education, despite the inefficiencies and political interference associated with the distribution of the intended services. Their study, "Impact of microfinance on household welfare: Assessing the case of the Samurdhi Program in Sri Lanka", used Household Income & Expenditure Survey (2006/07) data to estimate the impacts of Samurdhi on household income, health, education, and food and non-food consumption. Gunawardane (2014) found that the Samurdhi credit program plays a major role in empowering women in Sri Lanka; specifically, the evidence suggests that access to credit for poor women has increased income in their families. Kumari (2014) investigated the impact of microfinance on small entrepreneurships in Sri Lanka. Her findings revealed that the Samurdhi program gives priority to developing income generation programs in the area and has created some employment opportunities for village women. Kesavarajah (2011) investigated poverty and economic support in Sri Lanka, with the objective of shedding light on the effects of the government's Samurdhi expenditure on poverty reduction in Sri Lanka.
She reached the conclusion that the targeting outcomes of Samurdhi are inadequate and that the Samurdhi transfer program emerges as an inefficient program; she also found that the Samurdhi program appears to lack checks of accountability and transparency. Samurdhi officers are influenced by local politicians, and politicization is embedded in the design and influences both the selection of Samurdhi administrators and the selection of beneficiaries. Further, she suggested that it is vital to redesign the Samurdhi program and increase Samurdhi expenditure in a bid to reduce poverty and meet other development goals, such as human development and improvement in the productivity of workers through improved education and health. According to Damayanthi (2014), the Samurdhi program suffers from serious governance issues such as mis-targeting, lack of transparency, accountability, efficiency and effectiveness, equity and social justice, as well as an informed citizenry. She conducted her research to examine the governance issues in the government's major poverty alleviation program, the Samurdhi program, in Sri Lanka. For the purpose of that study she used both primary and secondary data; primary data were collected through a questionnaire survey, key informant discussions and focus group discussions in eight selected districts. The quantitative data were analyzed using simple statistical methods, and the qualitative data and information were analyzed through descriptive methods. In another study, Damayanthi (2014) aimed to explore the ongoing issues of mal-targeting in the Samurdhi program and their effects on the actual poor and on overall program effectiveness, examining why errors in targeting occurred in the safety-net and livelihood development components of the Samurdhi program in Sri Lanka, and the subsequent effects on the poor as well as on the program itself. Qualitative methods were used to collect and analyze data, and her findings revealed that prominent among the criticisms of program implementation is mal-targeting, or the lack of proper targeting. People's dependency mentality, politicization of society, and outdated income-level cut-offs were identified as the major reasons for mal-targeting. Major outcomes of the mal-targeting include disruptions to social harmony and a decline in the effectiveness of the program. Damayanthi & Champika (2014) attempted to evaluate the performance of Samurdhi Banks in poverty alleviation, as well as to identify the issues and difficulties faced by beneficiaries and officers in eight districts, considering district poverty levels. The findings show that approximately 57 percent of the bank customers' families experienced an increase in income due to the Samurdhi program, and it has also contributed 38 percent to the increase of assets. As the authors noted, fifty percent of the bank customers did not face any problem related to service delivery and received services smoothly. However, among the weaknesses and issues faced by customers were that the regulated account balance required for a loan was high, the release of the subsidy allowance was delayed, some officers did not provide efficient and effective services, and the maximum loan amount was not sufficient. Ganga & Sahan (2015) carried out a detailed analysis of Sri Lanka's social protection system and examined the relationship between social protection and labor market outcomes such as labor force participation and employment status.
The study revealed that the value of monthly cash transfers received under many social protection programs, including Samurdhi and PAMA, remains low, much lower than the national poverty line, which identifies the minimum level of income required for a person per month to meet his or her basic needs. The study found that the Samurdhi cash transfer program suffers from targeting issues of inclusion and exclusion errors; lack of coordination among programs implemented by different bodies; duplication or multiplicity of programs targeted towards certain vulnerable groups; budgetary constraints; and inequitable distribution of limited resources across programs and population segments. Moreover, the study stresses the need to improve 'targeting' in programs like Samurdhi and to make better use of the limited resources available for social protection for the benefit of the 'most needy' groups. Mahmood et al. (2014) explored the impact of microfinance loans on poverty reduction amongst women entrepreneurs in Pakistan. Their exploratory study is based upon an empirical investigation of 123 semi-structured interviews, as well as in-depth, semi-structured interviews with a subsample of ten women entrepreneurs who secured microfinance loans for their new or established enterprises. The emergent results show that access to finance is important for female entrepreneurs and helps them realize their potential as entrepreneurs. Toindepi (2016) argues that the business priorities of commercial microfinance providers differ significantly from those of development microfinance providers, and that this impacts program design, which means that clients of each, despite coming from the same target group, may have different experiences. The microfinance concept has evolved so far beyond any single philosophical or ideological confinement that there is now a need for formal recognition and acknowledgment that the commercial and developmental microfinance paradigms are parallel models whose continuous evolution is unlikely to converge in the near future, and so they should be treated separately. Abdul-Majeed Alaro & Alalubosa (2019) explore the option of Shari'ah-compliant microfinance as a viable alternative to many previous approaches adopted by the Nigerian state in tackling the menace of poverty in the land. Their findings reveal that the suggested Shari'ah tools are viable and sustainable for launching microfinance projects in the Nigerian context. Kim et al. (2018) show that the technical efficiency (TE) of MFIs in Vietnam is considerably high, with the average TE score and efficiency of scale being 85.5% and 94.7%, respectively. Size, age, outreach, and market target of MFIs are found not to be determinants of efficiency, while capital structure is. Sayvaya & Kyophilavong (2015) find that a village development fund program has a positive impact on household income and expenditure, but that the impact is statistically insignificant. Atiase & Dzansi (2019) indicate that microfinance has contributed to employment generation and poverty reduction in the Greater Accra region of Ghana through the provision of microloans to necessity entrepreneurs to engage in various types of income-generating activities; however, necessity entrepreneurs faced loan inadequacy issues coupled with under-financing difficulties.
H1: There is a significant impact of the Samurdhi Program on poverty alleviation. H1a: There is a significant impact of microcredit on poverty alleviation. H1b: There is a significant impact of livelihood activity on poverty alleviation. H1c: There is a significant impact of welfare activity on poverty alleviation.
Methods
This study examines the impact of the Samurdhi program on poverty alleviation in Kopay DS Division. It is based on a positivist paradigm and uses deductive reasoning in establishing the causes and effects of a social phenomenon [Hussey & Hussey, 1997]. The reasoning is deductive [Jayasuriya, 2007] because the hypotheses are derived first, and the related data are then collected to confirm or negate these established hypotheses. Bryman & Bell (2007) indicate that the deductive approach is related to quantitative research that follows objectivism, ontological realism, and epistemological positivism. Gill & Johnson (2002) argued that the development of a conceptual and theoretical structure prior to its testing through empirical observation is needed in a deductive research method. As a result, quantitative data were used as the evidence required for testing the hypotheses in this study. The population for this study consists of all Samurdhi beneficiaries in Kopay DS Division in Jaffna District, Sri Lanka. Kopay DS Division consists of sixteen villages subdivided into thirty-one Grama Niladhari divisions and three Samurdhi Zones. Of the three Samurdhi Zones, only one zone, served by the Kopay Samurdhi bank, was selected; it comprises four Grama Niladhari (GN) divisions: Kopay North (J/262) with 427 Samurdhi beneficiaries, Irupalai South (J/257) with 510, Urelu (J/267) with 579, and Urumpirai South (J/265) with 875. The total population of the four Grama Niladhari divisions thus consists of 2,391 Samurdhi beneficiaries. Finally, 200 Samurdhi beneficiaries in the four GN divisions were selected randomly. 200 questionnaires were issued, but the researcher could collect only 177; 23 were not returned. Therefore, only 177 responses could be incorporated in this study. The primary data were gathered using a questionnaire survey in Kopay DS Division. Standard questionnaires with tested reliability were used. To examine the hypotheses of the study, the collected data were analyzed using SPSS. The measurement of variables and concepts is indicated in Table 1 (indicators include interest rate and repayment for microcredit [Jayasuriya, 2007]; employment opportunity and training/technical assistance for livelihood activity [Kumari, 2014]; food stamps for welfare activity [Jayasuriya, 2007; Sharif, 2005]; and household/business assets for asset accumulation [Fatima & Qayyam, 2016; Damayanthi & Champika, 2014]).
Results and Discussion
The following paragraphs answer the research question: "How does the Samurdhi program impact poverty alleviation, particularly in Kopay DS Division in Jaffna District, Sri Lanka?" Firstly, a descriptive analysis of the characteristics of the sample is presented. Secondly, the analysis focuses on the correlations between the variables. Thirdly, the effects of the Samurdhi program on poverty alleviation are examined to answer the research question.
Descriptive Analysis
Descriptive statistics of the variables included in the study are presented in Table 2.
In line with Table 2, it is clear that of the total respondents investigated for this study, the overwhelming majority (97.7%) are female whereas 2.3% are male out of the 177 samples. It can be concluded that nowadays women are more involved than men in Samurdhi bank activities. Of the 177 respondents, the majority fall into the 31-40 age group (32.2%), followed by 30.5% aged 41-50, 28.8% aged above 50, and the remaining 8.5% aged 18-30. It can be concluded that people in the 31-40 and 41-50 age groups are mostly involved in Samurdhi program dealings. The results also show that 41.8% of beneficiaries had been engaged in the Samurdhi program for 10 years, 29.4% for 5 years, 12.4% for 2-3 years, and 9% for 1 year. Further, the results revealed that the majority of the clients fall into the grade 5-10 education category. Moreover, 32.8% had attained G.C.E. O/L whereas 15.3% of the
Multicollinearity Test
In this study, multicollinearity is measured using the Variance Inflation Factor (VIF) or tolerance test. In line with Table 3, all VIF values for the variables are less than 10, so it can be concluded that there is no multicollinearity issue.
Reliability Test
According to Hair et al. (1998), reliability is the "extent to which a variable or set of variables is consistent in what it is intended to measure". The questionnaire in this study was assessed for reliability using SPSS with the Cronbach's alpha method. Thus, the internal consistency of the Samurdhi program and poverty alleviation measures was tested through the Cronbach's alpha coefficient, assessed for each variable's item scales. The reliability of the test is reported in Table 4. The reliability of the measures was well above the minimum threshold of 0.60 in every case [Gliner & Morgan, 2000]. Thus, it can be concluded that all of the measures were generally reliable.
Correlation Analysis
The correlation analysis examines the pattern and strength of the relationship between the Samurdhi program and the poverty alleviation of Samurdhi beneficiaries in Kopay DS Division in Jaffna District. As per the results presented in Table 5, microcredit is positively correlated with income level (r=0.451) and health and nutrition (r=0.383) at the 0.01 significance level, and is also positively and significantly correlated with housing condition (r=0.284) and asset accumulation (r=0.277) at the 1% significance level. Livelihood activity is significantly and positively linked with income level (r=0.244) and asset accumulation (r=0.281) at the 0.01 significance level. Furthermore, welfare activity is positively and significantly correlated with income level (r=0.202, p=0.007) and asset accumulation (r=0.219, p=0.003) at the 0.01 significance level, whereas there is a significant relationship between welfare activity and health and nutrition (r=0.152, p=0.044) at the 0.05 significance level.
Regression Analysis
The regression analysis was performed to evaluate the impact of the Samurdhi program on poverty alleviation, as presented in Table 6.
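The study ran these diagnostics and the regression in SPSS; for readers who prefer open tooling, the same checks can be reproduced in Python. The sketch below is illustrative only: the file name, the DataFrame `df`, and the column labels (`microcredit`, `livelihood`, `welfare`, `poverty_alleviation`, the `mc_item*` item columns) are hypothetical stand-ins for the questionnaire scales, not the study's actual data file.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (one column per item)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

df = pd.read_csv("samurdhi_survey.csv")  # hypothetical data file (n = 177)

# Reliability check against the 0.60 threshold (hypothetical item columns).
alpha_mc = cronbach_alpha(df[["mc_item1", "mc_item2", "mc_item3"]])
print(f"Cronbach's alpha (microcredit): {alpha_mc:.3f}")

# Multicollinearity check: VIF < 10 is the rule of thumb used in the study.
X = sm.add_constant(df[["microcredit", "livelihood", "welfare"]])
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
print("VIF:", vif)

# OLS regression as in Table 6: adjusted R^2, F-statistic, and p-values.
model = sm.OLS(df["poverty_alleviation"], X).fit()
print(model.rsquared_adj, model.fvalue, model.pvalues)
```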
Based on Table 6, the value of the coefficient of determination (adjusted R-squared) is 0.250, which shows that approximately 25% of the total variance in poverty alleviation can be explained by the dimensions of the Samurdhi program as the independent variables in this model; the remaining 75% of the variability is not explained by the model. The model is a good fit because the significance value of the F-statistic is less than 0.05. Among the three Samurdhi activities considered in the analysis, only two, microcredit and welfare activities, have a significant impact on poverty alleviation, while livelihood activity has no significant impact. Hypothesis H1b stated that there is a significant impact of livelihood activity on poverty alleviation. Table 6 shows that the impact of livelihood activity on poverty alleviation is insignificant (p=0.809>0.05), so hypothesis H1b was not supported. This finding contradicts Kumari (2014). Meanwhile, the beta value for welfare activity is 0.144 and the p value is less than 0.05; therefore, welfare activity has a significant impact on poverty alleviation, and hypothesis H1c was supported. This contradicts Ganga & Sahan (2015). Summing up the overall result, it can be concluded that the Samurdhi program has a significant impact on poverty alleviation (F value = 20.570, p = 0.000). This is consistent with the findings of Rizphy & Jeyasinghe (2010) and Sanjeewanie et al. (2012).
Conclusion
This study mainly depicts the relationship between the Samurdhi Program and poverty alleviation in Kopay DS Division. It incorporated the Samurdhi Program as the independent variable, comprising microcredit, livelihood activity, and welfare activity. Poverty alleviation is the dependent variable, measured using income level, health and nutrition, housing condition, and asset accumulation. The aim of this study was to investigate the impact of the Samurdhi Program on poverty alleviation in Kopay DS Division. The findings can be stated as follows: there is a significant impact of the Samurdhi program on poverty alleviation; microcredit has a positive and significant impact on poverty alleviation; livelihood activity has an insignificant impact; and welfare activity has a positive and significant impact. Based on these findings, the researcher concludes that microcredit and welfare activities are working effectively, while the livelihood activity component needs improvement.
Limitations and Suggestions
There are some limitations. First, only a few of the many activities carried out under the Samurdhi Programme at the village level were considered in this study. Second, the sample size is quite small, restricted to 200 beneficiaries in the 4 GN divisions of one Samurdhi Zone in the Kopay Division. Third, many factors can affect poverty alleviation, but only a few were considered in the context of the Samurdhi programme. The study confirms that the Samurdhi Programme plays a vital role in reducing poverty and calls for the Government to adopt economic policies aimed at developing Samurdhi activities in order to help the poor by exposing them to better opportunities for employment and income growth, thereby achieving the goal of poverty reduction.
The results also suggest possible areas for future research. One would be the estimation of the relationship between Samurdhi activities and poverty alleviation using other poverty indicators (e.g., the head count ratio and other income-based and welfare-based indicators). Apart from this, the study does not consider the issues of rural and urban poverty separately. A promising extension of this work would be to consider rural and urban poverty reduction and their linkages with Samurdhi activities separately, so that policies can be framed with an individual focus on rural as well as urban areas.
7,204.4
2020-12-31T00:00:00.000
[ "Economics" ]
Laccase Immobilization on Poly(p-Phenylenediamine)/Fe3O4 Nanocomposite for Reactive Blue 19 Dye Removal
Magnetic poly(p-phenylenediamine) (PpPD) nanocomposite was synthesized by mixing p-phenylenediamine solution and Fe3O4 nanoparticles and used as a carrier for immobilized enzymes. Successful synthesis of PpPD/Fe3O4 nanofiber was confirmed by transmission electron microscopy and Fourier transform infrared spectroscopy. Laccase (Lac) was immobilized on the surface of PpPD/Fe3O4 nanofiber through covalent bonding for reactive blue 19 dye removal. The immobilized Lac-nanofiber conjugates could be recovered from the reaction solution using a magnet. The optimum reaction pH and temperature for the immobilized Lac were 3.5 and 65 °C, respectively. The storage, operational, and thermal stabilities of the immobilized Lac were higher than those of its free counterpart. The dye removal efficiency of immobilized Lac was about 80% in the first 1 h of incubation, while that of free Lac was about 20%. It was found that the unique electronic properties of PpPD might underlie the high dye removal efficiency of immobilized Lac. Over a period of repeated operation, the dye removal efficiency was above 90% during the first two cycles and remained at about 43% after eight cycles. Immobilized Lac on PpPD/Fe3O4 nanofiber showed high stability, easy recovery, reuse capabilities, and a high removal efficiency for reactive blue 19 dye; therefore, it provides an optional tool for dye removal from wastewater.
Introduction
Laccase (p-diphenol: dioxygen oxidoreductase, EC 1.10.3.2) belongs to the family of copper-containing oxidases, which catalyze the one-electron oxidation of a wide range of inorganic and organic substances, coupled with a four-electron reduction of oxygen to water [1-4]. This enzyme is among the most studied redoxases and has been used successfully as a commercial industrial catalyst. Laccase (Lac) has shown many applications in biomedical, biotechnological, and environmental areas such as organic synthesis, wine cork making, teeth whitening, immunoassay analyte labeling, biofuel cells, and biosensors [4-6]. Moreover, Lac can also play a major role in bioremediation processes such as dye removal from wastewater [7,8].
As is widely known, the difficulty of recovering free enzymes from solution and the poor stability of many free enzymes, including Lac, have hindered their development for use in large-scale applications [9]. Accordingly, immobilized enzymes have greater prospects for development since they have higher storage, thermal, and operational stabilities than their free counterparts. Among the carrier materials commonly used for immobilizing enzymes, nanomaterials have attracted great attention due to their large surface area to volume ratio and high porosity, which are highly efficient for enzyme attachment [10,11]. More importantly, the development of various nanostructure carriers for enzyme immobilization facilitates a broader range of applications and an increased efficiency of immobilized enzymes [12-14]. Furthermore, for some redoxases that require an electron shuttle for their catalytic reaction, their activity can be enhanced when these enzymes are immobilized on electron-conducting carriers. It has been demonstrated that carbon nanotubes are an ideal carrier for redoxase immobilization, since they possess superb electrical conductivity and can effectively enhance direct electron transfer between electrodes and proteins [15,16]. In addition, magnetic polymer-based nanocomposites for enzyme immobilization have also attracted extensive attention since they have shown great potential for applications in enzyme recovery and recycling [10].
Since Lac is a redoxase, it has been speculated that its activity may be enhanced through an electron-transfer pathway between the carrier and the enzyme when it is immobilized on nanostructured materials with good conductivity. Polyaniline and poly(p-phenylenediamine) (PpPD) nanofibers have a large surface area and a high density of nanopores onto which enzymes can be efficiently absorbed. These nanofibers are also capable of conducting electricity as polymer nanowires. Therefore, they are considered to be excellent candidate materials for enzyme immobilization, especially for redoxases. The use of magnetic and electron-conducting PpPD nanofibers as supports for Lac immobilization has the following obvious advantages: (1) the high specific surface area and functional N-H groups of PpPD nanofiber are suitable for the efficient binding of Lac; (2) magnetic nanocomposites provide a method for easily separating immobilized enzymes from solutions following treatment, thereby lowering operation costs; (3) conductive PpPD nanofibers can provide efficient channels for electron transport between enzymes and their substrates, which could improve the catalytic activity of Lac.
A wide variety of dyes have been extensively used in many industries such as the pulp, leather, cosmetics, food, paper, and textile industries [17]. Synthetic dyes have harmful effects on the environment owing to their toxicity to microbes, hydrophytes, and animals. Thus, environmental protection acts have been enacted in most countries, which demand that textile waste be treated before being released into natural water bodies [18]. It has now been proven that Lac can oxidize many synthetic dyes with high efficiency [19,20]. It has been reported that magnetic PpPD nanofiber has been used as a nano-adsorbent for the removal of Cr2O7^2− ions and as a photocatalyst for the degradation of acid dyes [21,22]. However, to the best of our knowledge, there have not been any reports concerned with the immobilization of Lac on magnetic PpPD nanofiber for the removal of dye. In addition, reactive blue 19 (RB19) (Figure 1), a typical anthraquinone dye, has been widely used in the textile industry and is a representative of an important class of toxic and recalcitrant organopollutants. Therefore, RB19 was selected as a model recalcitrant compound for removal from solution by enzymatic treatment in this study.
In this work, we fabricated a magnetically separable PpPD/Fe3O4 nanocomposite for Lac immobilization to increase enzyme loading, improve enzyme recovery and recycling, and create a surface microenvironment that permits electrical conductivity. The effects of pH and temperature on Lac activity and stability were studied. The reuse capabilities of Lac and its removal efficiency for RB19 were also determined.
Synthesis of Magnetic Nanoparticles
Fe3O4 nanoparticles were synthesized according to the method described by Cao et al. [23]. Briefly, 1.988 g of FeCl2·4H2O and 5.46 g of FeCl3·6H2O in 60 mL deionized water were mixed at room temperature. The mixture was stirred vigorously at 80 °C for 25 min. After that, NH4OH (20 mL) was quickly added into the mixture until the pH reached 10.0. After 30 min of vigorous stirring, the black precipitate was separated by magnetic decantation. The samples were then washed several times with deionized water until a pH value of 7.0 was obtained. The resulting Fe3O4 nanoparticles were dried at 60 °C under vacuum for 8 h.
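The masses quoted above correspond to the 1:2 Fe(II):Fe(III) molar ratio required by the Fe3O4 stoichiometry (FeO·Fe2O3). A quick back-of-envelope check, using standard molar masses (textbook constants, not values taken from the paper):

```python
# Verify the Fe(2+):Fe(3+) molar ratio of the co-precipitation recipe.
M_FECL2_4H2O = 198.81   # g/mol, FeCl2·4H2O
M_FECL3_6H2O = 270.30   # g/mol, FeCl3·6H2O

n_fe2 = 1.988 / M_FECL2_4H2O   # ~0.0100 mol Fe(2+)
n_fe3 = 5.46 / M_FECL3_6H2O    # ~0.0202 mol Fe(3+)

print(f"Fe2+: {n_fe2:.4f} mol, Fe3+: {n_fe3:.4f} mol, "
      f"ratio = 1:{n_fe3 / n_fe2:.2f}")
# -> ratio of roughly 1:2, matching Fe3O4 = FeO·Fe2O3
```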
Synthesis of PpPD/Fe3O4 Nanocomposite
In light of the process described by Zhang et al. [24], the nanocomposite of PpPD with Fe3O4 was prepared by in situ doping polymerization in the presence of H3PO4 as a dopant. The overall synthetic pathway for the nanocomposite (PpPD/Fe3O4) is shown in Scheme 1. The synthesis procedure was as follows: 0.3 mL of pPD monomer and 0.05 g of Fe3O4 nanoparticles were mixed with H3PO4 (1.5 mL) and dissolved in 20 mL of deionized water under supersonic stirring for 10 min at 15 °C. Then, an aqueous solution of ammonium persulfate (0.6 g in 6 mL deionized water) was added to the above solution. The mixture was incubated at room temperature overnight. The formed precipitate was separated by magnetic decantation and then washed with deionized water several times and methanol three times. Finally, the obtained dark precipitate was dried under vacuum at 50 °C for 24 h. Scheme 1 shows the synthetic pathway of the poly(p-phenylenediamine) (PpPD)/Fe3O4 nanocomposite.
Immobilization of Lac on PpPD/Fe3O4 Nanocomposite
Lac was covalently immobilized onto the PpPD/Fe3O4 nanocomposite using the glutaraldehyde activation procedure. In this procedure, 50 mg of PpPD/Fe3O4 nanocomposite was thoroughly washed with phosphate-buffered saline (PBS, 50 mM, pH 7.0). The pretreated carrier was submerged in a glutaraldehyde solution (7%, v/v) and then vigorously shaken for 4 h. After this, the activated carrier was washed four times with phosphate buffer and then mixed with a Lac solution (2 mg/mL in PBS, pH 7.0). Enzyme immobilization was conducted at room temperature for 2 h. The resultant Lac-immobilized PpPD/Fe3O4 nanocomposite was washed with PBS until no Lac activity was detected in the decanted wash solutions.
Measurement of the Activity of Free and Immobilized Lac
Assays of free and immobilized Lac activities were conducted spectrophotometrically using ABTS as the substrate [23]. The reaction was initiated by adding 0.1 mL of the Lac solution to 2.7 mL of sodium acetate buffer solution (50 mM, pH 4.0) and 0.2 mL of ABTS (4 mmol/L), and the mixture was then incubated at 25 °C for 3 min. The increase in absorbance of the solution was recorded at a wavelength of 420 nm. For measurement of the activity of immobilized Lac, the biocatalyst was immediately separated from the reaction solution using a magnet after the mixture had been incubated at 25 °C for 3 min, and the increase in absorbance of the supernatant was recorded at 420 nm. One unit of enzyme activity is defined as the amount of enzyme required to oxidize 1 µmol of ABTS per min.
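From the assay described above, volumetric activity follows from the Beer-Lambert law. A minimal sketch, assuming the commonly cited molar absorptivity of the oxidized ABTS radical cation at 420 nm (about 3.6 × 10^4 M^-1 cm^-1, a literature constant not stated in this paper) and a 1 cm path length; the example slope is illustrative, not a measured value:

```python
# Laccase activity from the linear increase in A420 during ABTS oxidation.
EPSILON_ABTS = 3.6e4   # M^-1 cm^-1 at 420 nm (assumed literature value)
PATH_LENGTH = 1.0      # cm, standard cuvette

def lac_activity_u_per_ml(dA_per_min: float,
                          total_volume_ml: float = 3.0,
                          enzyme_volume_ml: float = 0.1) -> float:
    """One unit (U) = 1 umol of ABTS oxidized per minute."""
    # Rate of product formation in the cuvette (mol L^-1 min^-1).
    rate_molar = dA_per_min / (EPSILON_ABTS * PATH_LENGTH)
    # Convert to umol/min in the 3.0 mL reaction, then per mL of enzyme added.
    umol_per_min = rate_molar * (total_volume_ml / 1000.0) * 1e6
    return umol_per_min / enzyme_volume_ml

# Example: a slope of 0.36 A420/min in the 3.0 mL assay with 0.1 mL enzyme.
print(f"{lac_activity_u_per_ml(0.36):.2f} U/mL")  # -> 0.30 U/mL
```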
Evaluation of the Effects of pH and Temperature on Lac Activity and Stability
The effects of reaction temperature and substrate pH were investigated by measuring the activities of free and immobilized Lac across a range of temperatures and pH values. The stabilities of the immobilized and free Lac were determined as follows. The storage stabilities of the free and immobilized Lac were tested after storage at 4 °C. For the assessment of their thermal stability, free and immobilized preparations of Lac were stored in PBS at 50 °C. The preparations of Lac were withdrawn at the same timed intervals during incubation, and the residual activity of each preparation was measured. For the assessment of its operational stability, the activity of immobilized Lac was measured as follows. After each reaction run, the immobilized Lac was taken out and washed with PBS to remove any residual substrate on the PpPD/Fe3O4 nanocomposite. The immobilized Lac was then added to a fresh reaction solution and its activity was detected under optimal conditions.
Electrochemical Analysis
Before modification, a Pt electrode was polished mechanically with 0.03 µm Al2O3 powder, washed with deionized water, and then sonicated in ultrapure water for 5 min. In general, 10 µL of a dimethyl sulfoxide dispersion was prepared, which included 1 mg PpPD/Fe3O4 nanocomposite or Lac-PpPD/Fe3O4 nanocomposite; it was then spread onto the surface of the freshly cleaned Pt electrode to prepare a thin film, which was allowed to dry at room temperature for 1 h. Electrochemical measurements were performed at room temperature using a three-electrode system with the film-modified Pt electrode as the working electrode, an Ag/AgCl reference electrode, and a bare Pt counter electrode. Sodium acetate buffer solution (50 mM, pH 4.0) was used as the electrolyte solution.
Removal of RB-19 by Free and Immobilized Lac and Repeated Use
The removal rate of RB-19 was determined by measuring the absorbance of test samples by spectrophotometry at the maximum absorbance wavelength of the dye (592 nm). The batch dye removal was carried out in 10 mL reaction mixtures containing 12 mg·L−1 dye prepared in sodium acetate buffer solution (50 mM, pH 4.0) and 0.5 U·mL−1 free or immobilized Lac. The mixtures were incubated in the dark at 25 °C for 2 h. At regular intervals, the treated samples were centrifuged (9000× g for 5 min) and the supernatants were recovered and subjected to spectrophotometric analysis to determine their content of RB-19. The magnetic biocatalysts were separated from the reaction broth using a magnet at the same time intervals. Dye removal at different time intervals was determined using two different controls, namely heat-inactivated free and immobilized Lac, which were incubated at 100 °C for 15 min. The capacity of immobilized Lac for repeated dye removal was evaluated over eight cycles. After each reaction cycle, the immobilized Lac was washed several times with the buffer solution and fed into a new cycle. Dye removal (%) was calculated based on the following formula: Dye removal (%) = (A0 − At)/A0 × 100, where A0 is the initial absorbance of the dye and At is the absorbance at the measured incubation time point. All experiments were performed in triplicate. Data presented in the figures correspond to mean values with standard errors.
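The removal formula above lends itself to a one-line calculation. The sketch below applies it to an illustrative A592 series; the absorbance values are made up for demonstration and are not measurements from Figure 7:

```python
# Dye removal (%) = (A0 - At) / A0 * 100, from A592 readings over time.
def dye_removal_percent(a0: float, at: float) -> float:
    return (a0 - at) / a0 * 100.0

a0 = 0.85  # initial absorbance of RB-19 at 592 nm (illustrative)
readings = {15: 0.61, 30: 0.38, 60: 0.17, 120: 0.15}  # minutes -> A592

for minutes, at in readings.items():
    print(f"{minutes:>4} min: {dye_removal_percent(a0, at):5.1f} % removed")
```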
Measurements
The morphologies of the composites were confirmed by transmission electron microscopy (TEM; model 9000, Hitachi, Tokyo, Japan). Fourier transform infrared spectroscopy (FT-IR) analysis was carried out using a Tensor 27 spectrometer (Bruker, Karlsruhe, Germany). The electrochemical measurements were carried out on a PGSTAT302 Autolab B.V. instrument (Metrohm, Herisau, Switzerland) with a scan range of −2 to 2 V and a scan rate of 5 mV/s. Lac activity and dye removal were measured using a UV-1700 spectrophotometer (Shimadzu, Kyoto, Japan).
TEM Imaging and FT-IR Analysis
Representative TEM images of Fe3O4 nanoparticles are shown in Figure 2a. The Fe3O4 magnetic particles were found to have a mean diameter of ~20 nm and to be rod-like in shape. As shown in Figure 2b, the TEM image indicated that the synthesized PpPD polymers formed nanofibrous mats, which were amorphous as determined by their electron diffraction patterns. Similar results have been reported for polyaniline/Fe3O4 nanocomposite [24]. The PpPD fibers were observed to be randomly cross-linked, and the magnetic particles were aggregated with the fibers (Figure 2c). The TEM image of the PpPD/Fe3O4 composites was similar to that of the PpPD fibers, but small light dots were observed on the composite, which were absent from the image of PpPD fibers, suggesting that Fe3O4 magnetic particles were embedded into the PpPD nanofibrous mats. It has been reported that there is a strong interaction between the quinoid rings of PpPD and Fe3O4 nanoparticles that enhances the formation of PpPD/Fe3O4 nanostructures [25]. The PpPD/Fe3O4 composites could be efficiently separated from the solution using a magnet.
The FT-IR spectra of the nanocomposites are shown in Figure 3. As seen in Figure 3a,b, a broad peak at around 3413 cm−1 corresponded to the N-H stretching vibration of the secondary amine group in the polymer chain [24]. The peak at 1613 cm−1 was attributed to the stretching vibrations of benzenoid rings, and the bands at 1355 cm−1 and 1230 cm−1 corresponded to C-N stretching vibration with aromatic conjugation [26]. Besides the above bands, a band at 572 cm−1, attributed to Fe3O4, was clearly observed in Figure 3a,c, indicating that Fe3O4 magnetic nanoparticles existed in the newly synthesized nanocomposite [27]. In brief, these absorption bands confirmed the successful synthesis of the PpPD/Fe3O4 composite, in agreement with the findings of TEM imaging.
Effects of pH and Temperature on Lac Enzyme Activity
A schematic illustration of Lac immobilization on the PpPD/Fe3O4 nanocomposites is shown in Figure 4. Under the optimized immobilization conditions, the maximum enzyme load was about 120 mg Lac/g nanocomposite and the maximum retention was about 80% of the original Lac activity. The effect of pH on the activity of the free and immobilized Lac is shown in Figure 5a. According to the experimental results, the optimum reaction pH of free Lac was approximately 4.0, which was consistent with the results of D'Annibale et al. [28]. However, the optimal pH of immobilized Lac showed a slight shift to 3.5. The activity of the immobilized Lac was higher than that of the free Lac in the pH range of 2.0-6.0, which was attributed to the unique electronic properties of PpPD. It was previously reported that PpPD is a conducting polymer and that the electrochemical process of PpPD involves proton transfer [29]. Under acidic conditions, the protonation of PpPD improved its conductivity, which was propitious for electron transfer between the enzyme, carrier, and substrates. This resulted in an enhancement of the activity of immobilized Lac compared with that of free Lac.
The effects of temperature on the activities of free and immobilized Lac are shown in Figure 5b. The relative activities of both forms of Lac clearly increased with the initial increases in temperature, and then decreased with further increases in temperature. The optimum temperature for the activity of free Lac was 60 °C, while that for the activity of immobilized Lac was 65 °C. Immobilized Lac exhibited higher stability than free Lac at higher temperatures. The reason may be that the PpPD/Fe3O4 nanocomposite support limited conformational changes in the enzyme molecules, protecting them to some degree from deactivation at high temperatures.
Stability Analysis
Figure 6a shows the storage stabilities of immobilized and free Lac. With increasing storage time, the relative activities of free and immobilized Lac decreased, but free Lac was more rapidly inactivated than immobilized Lac. Free Lac lost 70% of its activity after incubation at 4 °C for 30 days, whereas immobilized Lac lost less than 40% of its activity, demonstrating that it had a higher stability in refrigerated storage than free Lac. It is well known that storage stability is an important advantage of immobilized enzymes for application in bioprocesses. Figure 6b presents the thermal stability of immobilized Lac. It can be seen that the thermal stability of immobilized Lac on the PpPD/Fe3O4 nanocomposite was improved to a certain degree. Enzyme immobilization could greatly reduce the costs of using enzymes in practical applications. Immobilized enzymes often have other drawbacks such as low rates of enzyme recovery and recycling [12,30-33]. However, it has been proven that the use of magnetic carriers can dramatically improve the efficiency of enzyme recovery and recycling [34-36]. Thus, the magnetically separable Lac-PpPD/Fe3O4 nanocomposite may contribute to enzyme recovery and reduce costs. As shown in Figure 6d, Lac immobilized on magnetically separable PpPD nanofibers could be efficiently separated from the reactants using a magnet. After each reaction, the immobilized enzyme was harvested and the recovery of enzyme activity was determined. With an increasing number of use and recovery cycles, the activity of the immobilized Lac gradually decreased owing to inactivation and product loss during each run (Figure 6c). However, more than 75% of its initial activity was retained after eight cycles. The activity of immobilized Lac was clearly stabilized compared with that of free Lac. Similar results have been reported for other nanobiocatalysts [37,38].
The above results showed that the immobilized Lac had a higher stability than its free counterpart. The improved storage and operational stabilities of the immobilized Lac can be attributed to the covalent bonding between the Lac and PpPD nanofibers, which improves the enzyme's stability against conformational denaturation under extreme conditions.
Removal of RB-19
Figure 7 shows the removal of RB-19 by free Lac, immobilized Lac, and carrier (PpPD/Fe3O4 nanocomposite without Lac) versus time. For immobilized Lac, the removal rate increased rapidly during the first 1 h of treatment and reached 80%. In comparison, the removal efficiency of dye by free Lac only reached 20%. For the carrier, approximately 40% dye removal was obtained in the first 1 h of incubation, and subsequently no significant increase was found. The primary charge on the surface of the PpPD/Fe3O4 nanocomposite was positive at pH 4.
Thus, the adsorption of dye on the carrier was partly attributed to ionic electrostatic attractions between the positively charged sites of the carrier surface and the negatively charged sulfonyl (−SO3) groups of the dye molecules (Figure 1). Additionally, other adsorption mechanisms such as hydrogen bonding may play a partial role, owing to interactions between groups on the carrier surface and groups of the RB19 molecule [39]. The dye removal efficiency of immobilized Lac (80%) was clearly higher than that of free Lac under the same conditions, and was even higher than the sum of dye removal by free Lac and the carrier (60%), which can be attributed to two causes. The first is the dye adsorption by the carrier that contributes to dye removal. The second is increments in solution conductivity brought about by the immobilization of Lac on the PpPD/Fe3O4 nanocomposite, which may be an important factor affecting the removal efficiency. Figure 8 shows that the current of the immobilized Lac-PpPD/Fe3O4 nanocomposite was higher than that of the carrier without Lac, suggesting that electron transfer occurs between Lac and the PpPD fiber. Figure 9 shows a schematic illustration of the interaction mechanisms between Lac, the PpPD fiber, and RB-19. It can be speculated that the conducting PpPD/Fe3O4 fiber mat may act as a "molecular wire" and provide effective channels for electron transfer. After the RB-19 molecules were adsorbed to the PpPD/Fe3O4 nanocomposite, the carrier may promote electron transfer from the redox centers of Lac to the dye and enhance the catalytic activity of Lac [40,41]. Similar results have been reported for catalase covalently bound to electrospun nanofiber meshes filled with carbon nanotubes [40]. It was also reported that conducting carbon nanotubes were able to promote direct electron transport from redox enzymes such as ascorbate oxidase, peroxidase, and Lac [38,40-46]. Therefore, the PpPD/Fe3O4 nanocomposite may also be suitable for enzyme immobilization, especially for redox enzymes.
The reusability of immobilized enzymes is important because of its influence on processing costs in wastewater treatment industries [47]. Figure 10 shows that when the immobilized enzyme was subjected to repeated use and recovery cycles, the percentage of dye removal by Lac-PpPD/Fe3O4 remained the same (90%) for the first two cycles and then decreased gradually with every cycle, reaching 43% at the eighth cycle. The decreased dye removal efficiency may be related to several factors. First, the enzyme may be inactivated or inhibited by the accumulation of dye degradation products [48]. Second, mass transfer limitations may be induced by dye or metabolite adsorption to the PpPD/Fe3O4 carrier [49,50]. Moreover, during the catalytic reaction, some of the Fe3O4 nanoparticles were found to be released from the PpPD/Fe3O4 nanocomposite due to vigorous stirring, leading to a reduction in the magnetic force of the carriers. Thus, the immobilized Lac may have partly been lost along with carrier molecules that had lost their magnetism during separation of the immobilized enzyme from the reaction solution using a magnet. The loss of Fe3O4 nanoparticles from the nanocomposite will be a challenge for the practical application of Lac-PpPD/Fe3O4. This problem may be resolved by covalently binding Fe3O4 nanoparticles with PpPD polymers.
Conclusions
In conclusion, we have successfully fabricated a PpPD/Fe3O4 nanocomposite with a high electrical conductivity for use as a carrier for enzyme immobilization. The morphology and chemical structure of the nanocomposite were characterized using TEM and FT-IR. Both the pH and temperature optima of immobilized Lac showed a slight shift compared with those of free Lac, and the immobilized Lac exhibited a higher activity under conditions of acidic pH. Moreover, the immobilized Lac on PpPD/Fe3O4 nanofibers showed excellent characteristics, such as high storage, thermal, and operational stabilities, easy recovery, and high dye removal efficiency. These advantageous characteristics were related to the electrical conductivity and biocompatible microenvironment provided by the PpPD. Our results indicated that PpPD/Fe3O4 nanocomposite may be an appropriate carrier for enzyme immobilization. Lac-PpPD/Fe3O4 has potential applications in dye wastewater treatment.
Figure 5. Effects of (a) pH and (b) temperature on Lac activity.
Figure 6. (a) Storage stability; (b) thermal stability; and (c) operational stability of free and immobilized Lac; (d) recovery of immobilized Lac from the solution using a magnet.
Figure 7. Removal of dye by free Lac, immobilized Lac, and carrier versus time.
Figure 9. Proposed reaction scheme for dye removal catalyzed by Lac immobilized on PpPD/Fe3O4.
9,903.6
2016-08-17T00:00:00.000
[ "Chemistry", "Environmental Science", "Materials Science" ]
Stakeholders’ Risk Perception: A Perspective for Proactive Risk Management in Residential Building Energy Retrofits in China The implementation of energy retrofits of residential buildings faces many risks around the world, especially in China, resulting in slow retrofit progress. Stakeholders’ proactive risk management is the key to the smooth implementation of retrofit projects but is normally affected by risk perception. Perceived risks, rather than real risks, are the motivators of their proactive behaviours. This paper aims to understand and address the present risk perception of stakeholders in order to drive effective proactive risk mitigation practices. Based on a risk list identified through a literature review and interviews, a questionnaire survey was conducted to analyse and compare different stakeholders’ perceptions of each risk by measuring the levels of their concern about risks. It is validated that all stakeholder groups tend to proactively mitigate the risks they perceive as high. Proactive risk management by risk-source-related stakeholders deserves more attention, and responsibility-sharing with transaction costs (TCs) considerations contributes to the enhancement of risk perception. The additional responsibilities for construction quality and maintenance to be taken by the government and contractors should be clarified, and the government should also be responsible for assisting design work. Effective information helps lower homeowners' risk perception, which can motivate their initiative to cooperate. Introduction Building energy use has become the main driver of growing worldwide energy consumption and CO 2 emissions. Worldwide, 28% of CO 2 emissions and 30% of final energy consumption were attributed to the building sector in 2018 [1]. In particular, the final energy consumption of residential buildings accounts for over 70% of the global total [1]. The continuous global growth in building end uses is mainly driven by heating, lighting, and household cooking [2]. The most striking increase in energy intensity per unit of floor area is related to space cooling, with a growth of nearly 10% from 2014 to 2018 [3,4]. In China, building energy consumption was 899 million tonnes coal equivalent (tce), and CO 2 emissions were 1.96 billion tons in 2016, accounting for 20.6% and 19.4% of the national totals, respectively, of which energy consumption and carbon emissions of urban residential buildings account for 38% and 41%, respectively [5]. Meanwhile, China has also experienced rapid growth of energy demand for space cooling over the past two decades, increasing at 13% per year since 2000 and even reaching 50% of peak electricity demand in recent summers, which leads to a large increase in CO 2 emissions [6]. Sustainable buildings are the key factors to mitigate such environmental impacts, and this goal can be achieved by replacing inefficient building elements with more efficient ones [7]. as a more practical way towards project objectives. Smith and Merritt [26] also believed that proactive risk management could effectively control uncertainty. Uncertainty is one of the primary transaction characteristics and also increases transaction costs (TCs) in the transaction process [27]. TCs appear throughout the whole process of energy retrofit projects and originate from due diligence, negotiations, and monitoring [28]. When TCs are too large, exchange, production, and economic growth are inhibited [29]. 
Proactive risk management, an effective means of controlling uncertainty, can lower TCs and thereby eliminate the barriers to energy retrofit implementation for a smooth retrofit process. However, studies on energy retrofits of residential buildings have not considered stakeholders' proactive behaviours as risk mitigation measures. Previous studies tended to analyse risks from the perspectives of the energy efficiency gap and investment benefits [15,[30][31][32][33][34] and viewed risks as the basis for the selection of retrofit solutions [35,36]. Risk mitigation has focused on the development of energy-savings insurance to transfer investors' risks [37,38]. These measures aim to safeguard investors' interests rather than to eliminate the barriers to the smooth implementation of the whole energy retrofit process. Stakeholders' proactive behaviours for risk mitigation are generally aimed at their perceived risks. The connections between risk management and project success depend on three elements: stakeholders, their behaviours, and their risk perception [39,40]. Indeed, the contributions of risk management to success mostly result from the impacts of risk perception on stakeholders' behaviours, namely that stakeholders adjust their behaviours according to their perception of risks [41,42]. Risk perception is a kind of subjective evaluation of risks by stakeholders and is based on the type of risk, personal experience, beliefs, attitudes, and culture [43,44]. Stakeholders' perception of risk is based on simplified decision-making processes rather than real situations, and cultural differences also lead to differences in subjective rationality and, further, in risk perception [45]. Differences and contradictions in risk perception among different project stakeholders result in misunderstandings and conflicts in risk mitigation practices [46]. Uncertainty avoidance is the core principle of stakeholders' behaviours [47]. If a potential risk is perceived by stakeholders to be high, they will take measures to mitigate it [48]. However, these stakeholders' actions aiming to mitigate risks produce TCs. TCs, in turn, affect stakeholders' behavioural selection. Transaction cost is an essential factor when transaction parties make trading decisions [49]. Stakeholders themselves have motivations to economize on TCs to maximize their own benefits. High TCs can be barriers to stakeholders' proactive behaviours for risk mitigation. As with individuals' behaviours, the TCs incurred by these behaviours are also subjective [50]. In effect, stakeholders who voluntarily bear high TCs tend to expect higher benefits [51]. Such behavioural conflicts among different stakeholders, resulting from different risk perceptions and TCs, may render those bearing high TCs unable to obtain the benefits they expect, which leads to the dissatisfaction of some stakeholders and further hinders the smooth implementation of retrofit projects. Risk perception can motivate stakeholders' proactive risk management, which is the key to the smooth implementation of energy retrofit projects. The differences in risk perception among different stakeholders lead to contradictory risk mitigation practices, and TCs play an important role in the behavioural conflicts arising from these contradictions. This paper aims to analyse and address different stakeholders' perceptions of risks in order to motivate stakeholders' initiative in effective risk management. 
This paper first establishes a risk list through both a literature review and interviews to connect the risks in the whole process of energy retrofit in China with the main stakeholders. Interviews are also conducted to explore stakeholders' proactive behaviours for risk mitigation in practice. A questionnaire survey is then conducted to examine and compare different stakeholders' perceptions of each risk by measuring the levels of their concern about risks. A validation is conducted to link high levels of risk concern with proactive risk management. Finally, some suggestions with TCs considerations are given for the different risk perceptions of stakeholders to improve the effectiveness and feasibility of proactive risk mitigation practices. Risk Perception There is no agreement about the measurement of individuals' risk perception, and risk perception is regarded as a complex construct [52]. It is important for studies on risk perception to choose proper risk dimensions according to the study purpose [53]. Different items have been used by previous studies to help shape risk perception, including cognitive, emotional, societal, and subconscious factors [54][55][56][57]. In particular, cognition and emotion are the most common and are generally viewed as the main dimensions of risk perception. The cognitive dimension means the perceived likelihood and severity of risks, while the emotional dimension refers to feelings of worry and anxiety [58]. Sjöberg [53] stated that risks give rise to cognitive rather than emotional perception. It was also highlighted that risk perception requires a more rational judgment and that people seldom base their judgment of risks on emotions. However, Hartono, et al. [59] argued that decision-makers tend to make decisions based on their intuition and feelings rather than on normative theory (e.g., the probability and consequences of risks). Indeed, some studies on cognition also emphasized that individuals' cognitive ability is limited due to their bounded rationality [60,61]. It is believed that emotions (e.g., worry and fear) can motivate people to self-protect [15,[30][31][32][33][34]. In short, both cognitive and emotional factors should be considered in the judgment process of risk perception [62]. Concern is a concept involving both cognitive and affective dimensions and can be used to measure stakeholders' perceptions of risks. Dunwoody and Neuwirth [58] viewed concern as an affective judgment in risk perception, but concern was regarded by Rundmo and Iversen [63] and Brown, et al. [64] as a more cognitive notion in risk perception. Likewise, Rundmo [65] thought that concern is one aspect of affect but is associated with cognitive risk perception. Worry is generally viewed as an active emotional state and is close to adaptive behaviours for risk mitigation [66,67]. Concern can be seen as covering the topics that people are worried and upset about and is closely related to actionable worry [68]. Concern itself can affect people's behaviours, and certain levels of concern can motivate people to take action to handle risks [69,70]. In fact, the concept of concern has been adopted by some studies to measure risk perception. Wildavsky and Dake [71] evaluated the perception of technical, environmental, social, and economic risks based on a series of people's concerns. Similarly, how much people are concerned about risks is also used to indicate the levels of their risk perception [72][73][74]. 
Based on the Gallup environment surveys, in which respondents were asked about the degree of their concern about economic and social problems, Xiao and McCright [75] formed a measurement framework of risk perception. Mou and Lin [76] also used the level of risk concern to measure the public's perceived level of risks related to food supply and handling. As a result, this paper also applied stakeholders' concerns about risks to the measurement of risk perception. Behaviours Related to Risk Perception The role of perception in precautionary and protective behaviours has been highlighted in many studies. There is an assumption in protection motivation theory [77] that individuals' perception of the severity of a threat and the effectiveness of mitigation measures is the basis of their protective behaviours. The protective action decision model [78,79] also points out the role of perception in protective behaviours and postulates that risk perception has impacts on decision making about mitigation measures. In addition, prospect theory [80] also aims to predict individuals' behavioural responses to different risk perceptions. This theory argues that there is a negative connection between risk perception and risk-taking behaviours (e.g., risk-averse and risk-seeking) [81]. Rogers [82] stated that an individual's perception of risks facilitates their engagement in protective behaviours. Risk perception contributes to individuals' perception of their responsibility for environmental protection [83]. Individuals with a high perception of environmental risks have stronger intentions to take environmentally friendly actions [84,85]. It has been found in studies on disasters and hazards that risk perception can predict warning responses aimed at reducing the losses from disaster risks [86]. People with high risk perception are more likely to take preventive actions than their counterparts with low risk perception [87,88]. Adams [89] described the relationships between safety perception and risk status and pointed out that an increase in safety perception could motivate individuals to adopt compensating behaviours that adjust risk levels. Loosemore, et al. [90] applied this logic to the construction field in order to drive people to adjust their behaviours for risk mitigation. The differences in risk perception among different groups lead to the diversity of their practices in risk mitigation [91]. In short, risk perception is an important motivator of stakeholders' proactive risk management. Transaction Costs (TCs) Considerations TCs are different from production costs and are the economic equivalent of friction in physical systems [92]. TCs are influenced by three main transaction dimensions: asset specificity, uncertainty, and frequency [93]. Asset specificity is usually defined as "durable investments that are undertaken in support of a particular transaction" [92]. Uncertainty is classified as environmental and behavioural uncertainty. Environmental uncertainty means that transaction circumstances cannot be specified beforehand, leading to an increase in the time and processes needed for monitoring and controlling against ecological diversity [94]. Behavioural uncertainty refers to transaction partners concealing and distorting information [92]. Stakeholders need to bear high TCs when involved in the interactions for risk mitigation [22], such as the costs of learning knowledge, collecting information, supervising construction work, and exploring new technical schemes. 
Preventive behaviours originating from high risk perception were based on low costs of behavioural change [95]. People who have positive attitudes towards proactive behaviours may not be able to put such behaviours into practice due to a lack of resources [96]. In fact, risk perception is associated with people's ability to understand and respond to risks and with objective risk attributes [97,98]. Probability and impact are the main attributes of risks, both of which feature uncertainty. From a TCs perspective, asset specificity in risk management service transactions can be considered as the capability of different transaction parties for risk management [99]. In addition, the degree of people's concern about risks and their experience in risk management have essential impacts on their ability to acquire and process information, which further affects their risk preparedness behaviours [100]. That also means that proactive risk management practices related to risk perception are restricted by uncertain information and by specified assets concerning stakeholders' experience. Proactive risk management involves stakeholders' participation, risk management commitment, and initiating risk management processes early in the project [25]. Proactive risk management can be regarded as the activities of stakeholders in establishing and managing committees, and the success of proactive risk management efforts depends on stakeholders' commitment to risk management. In the Chinese context of residential energy retrofit, risk perception is concerned with environmental uncertainties about the stability of retrofit policy, the ambiguity of retrofit performance, the complexity of design, the complexity of construction, and even the maturity of the retrofit market in terms of technology, competence, and materials. Behavioural uncertainty is based on stakeholders' opportunism, and commitment can help prevent opportunism [101,102]. Behavioural uncertainty in risk management transactions is related to stakeholders' commitment to risk management [103]. Asset specificity and uncertainty incur more TCs in risk management service transactions and thereby prevent stakeholders from undertaking proactive risk management practices. Based on transaction cost theory (TCT), the major characteristics of proactive risk management affected by risk perception include: (1) stakeholders' experience and ability in terms of risk management, which are the main specified assets of proactive risk management; (2) environmental uncertainty factors related to proactive risk management; (3) stakeholders' commitment to risk management, which corresponds to behavioural uncertainty. Research Methodology The national documents provide a generic scope for the retrofitting objects of residential buildings in China. In general, priority for energy efficiency retrofitting is given to residential buildings with good seismic and structural safety performance and poor thermal performance of the building envelope [104]. These buildings were constructed with few energy efficiency measures, and residents need to consume a great deal of energy to improve the indoor thermal environment. At present, the comprehensive retrofitting mode for residential quarters is encouraged [9]. In this pattern, there are not only energy efficiency measures but also measures regarding environment improvement, infrastructure renovation, structure reinforcement, etc. 
There are some differences in the scopes of retrofitting objects among different provinces in the HSCW zone, but old residential quarters are the common focus of energy retrofitting. These residential quarters have generally been in use for at least a dozen years and consist of several multi-story apartment buildings. This paper takes Anhui province in the HSCW zone of China as the object of empirical analysis. There are five basic criteria for the retrofitting scope in Anhui province: the residential quarters were constructed and delivered before 31 December 2000; the gross floor area is not less than 5000 m 2 ; the quarters are not involved in other renovation plans (e.g., urban renewal, shantytown renovation, and urban village renovation); the land of the residential quarters is owned by the nation; and the apartment buildings are composed of complete residential packages including living rooms, bedrooms, a kitchen, a bathroom, etc. Literature Review This paper conducted a systematic review to identify the theoretical risks. Articles considered in the literature review were related to energy retrofitting of residential buildings and published in international scientific journals up to March 2018. Google Scholar was the main database for the literature search. The keywords used for searching articles were classified into three categories as follows: (1) "energy retrofitting" and "energy renovation"; (2) "residential buildings" and "housing"; (3) "risks", "uncertainty", and "barriers". This paper selected one keyword from each category in each search and combined them to search articles, such as "energy retrofitting", "housing", and "risks". Interview The risks were identified through the literature review and face-to-face interviews in China. Based on a field survey, this paper divides the main stakeholders in retrofit projects into four groups, namely homeowners, governments, designers, and contractors. Interviewees were directly related to energy retrofitting in Anhui province in China and were mostly from energy retrofit cases in three cities, including ten government officials, four designers, four on-site construction managers, and four homeowners. In these projects, doors and windows were replaced by those with higher levels of insulation, and new thermal insulation materials were also used to strengthen the insulation of walls and roofs. The government representatives were selected from four levels of government departments of housing and construction, including the provincial government, the municipal government, the district government, and the sub-district administrative office. Except for the provincial government, the interviewees from the other three levels of government were almost always involved in all stages of the energy retrofitting projects in practice. For this reason, the government interviewees are not only familiar with all processes in retrofitting projects but are also qualified to identify the risks existing in each stage. In particular, interviewees from sub-district administrative offices keep in close touch with contractors and homeowners, which also enables them to know about the risks associated with these two stakeholder groups. The industry stakeholder representatives were the chief leading members in charge of retrofitting design and construction in practice. All of them were involved in three pilot retrofitting projects in Anhui province. 
As the main stakeholder groups, these interviewees from design and construction companies have a more comprehensive view of the risks occurring at the stages of design and on-site construction and can provide more detailed information about these risks. The homeowner representatives were from three pilot projects and were also members of either homeowners' committees or neighbourhood committees in the local residential quarters. There are 612 households in total in these three projects. The homeowners' committee acts on behalf of all the homeowners in a residential quarter. Members of homeowners' committees gathered homeowners' requirements and suggestions in the course of retrofitting implementation and reported them to the other retrofitting parties. Neighbourhood committees played a similar role in the retrofitting projects. Two interviewees were both neighbourhood committee staff and homeowners. There are no homeowners' committees in some renovated residential quarters, and members of the neighbourhood committee are therefore responsible for information transmission in practice. As members of homeowners' committees and neighbourhood committees, these interviewees have a better understanding of the potential project risks than ordinary homeowners. The interviewees introduced the work and responsibilities of their own stakeholder groups and elaborated on the problems they encountered and their concerns in the course of project implementation. Meanwhile, they were also asked about proactive measures taken in practice for risk mitigation. Interviewees' views were taken into consideration to adjust the theoretical risks to the Chinese context. The risk list is shown in Table 1, in which 21 risks exist across the whole process of residential energy retrofit projects in China. Table 1. Risks in the whole process of energy retrofit projects in practice (columns: Phases; Risks; Literature Sources). Questionnaire Survey According to the above risk list, a questionnaire survey was conducted to explore the concern of different stakeholder groups about the risks in the whole process of energy retrofit projects in China. The questionnaire comprised two sections: (1) background information about the respondents; (2) respondents' concern about different risks. In the second part of the questionnaire, a Likert scale of 1-5 was used to measure the level of stakeholders' concern about a risk from their subjective point of view (1 = not concerned, 2 = a little concerned, 3 = neutral, 4 = somewhat concerned, 5 = very concerned). The questionnaires were distributed via personal delivery to increase the response rate. The questionnaires were targeted at people representing four different stakeholder groups: governments, homeowners, contractors, and designers. A total of 450 questionnaires were delivered to the respondents, and 172 valid questionnaires were collected from 44 government officials, 55 homeowners, 38 construction managers, and 35 designers, respectively. The response rate is 38.2%, which is acceptable and common. These respondents have been involved in energy retrofitting projects in five cities of Anhui province in China. Hefei, the capital of Anhui province, was listed as a pilot city for energy retrofitting of residential buildings in the HSCW zone of China in 2012. Since 2016, the provincial government has encouraged applying energy efficiency measures to province-wide existing residential buildings. Anhui province had operated more than 300 energy retrofitting projects by 2019. 
The government respondents, as the decision-makers and executors, were involved in all the retrofitting projects in the city where they work. The respondents from the design and construction companies were also participants in the completed retrofitting projects. Moreover, these design companies generally undertake the design work of most energy retrofitting projects in their own cities. All the homeowners involved in this questionnaire survey were related to comprehensive energy retrofitting projects that have been completed. In fact, retrofitting items (e.g., exterior windows, sunshade, roof, exterior wall, etc.) were only partially executed in the majority of energy retrofitting projects in Anhui province and even in the HSCW zone. There were only three comprehensive energy retrofitting projects in Anhui in 2017, and these respondents are some of the owners in those three retrofitting projects. These homeowners had more exposure to other participants and to difficult retrofitting work due to the comprehensiveness of the retrofitted building items. The complexity of comprehensive projects also enables them to have a more holistic perception of project risks. Comprehensive energy retrofitting is the major trend and is being advocated by more governments. The views of these respondents can also provide lessons for future retrofitting projects. Data Analysis Method The data collected from the questionnaires were analysed from three aspects: the comparison of risk concern within each stakeholder group, the comparison of risk concern among all stakeholder groups, and the comparison of risk concern within different pairs of groups. First, the degree of concern about all risks in each stakeholder group was measured by mean scores, which makes it possible to investigate the rankings of risks in terms of stakeholders' concern. Second, one-way ANOVA was applied to compare the mean scores of all the stakeholder groups in order to find the main differences in stakeholders' concern about all risks from an overall perspective. Levene's test for equality of variances was applied prior to one-way ANOVA to assess the assumption of homogeneity of variance, i.e., that there is no difference in the variances among all the groups. Variances among the four groups were taken to be equal if the significance value (p-value) was over 0.05, in which case the concern among all of the groups could be compared based on the p-value of one-way ANOVA. If not, Welch's test was used to adjust the results of one-way ANOVA, and a p-value of less than 0.05 also served as the standard for measuring the significance of differences. Welch's test is considered more reliable when variances are unequal [114,115]. Third, for those risks with significant differences among all four groups, Scheffe's test or the Games-Howell test was adopted to compare risk concern within different pairs of groups according to the results of the abovementioned Levene's test, and the threshold value p was also 0.05. Scheffe's test is the most common for equal variances, and there is no need for each group to contain the same sample size. This test can also be used to make all possible comparisons among group means, not just planned pairwise comparisons. The Games-Howell test is suitable when the variances are unequal and also does not assume the same sample size among all of the groups. Moreover, this test is an appropriate follow-up to Welch's test. 
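As an illustration of this pipeline, the sketch below runs the Levene → ANOVA/Welch → post hoc sequence on synthetic Likert scores. The group sizes match the survey (44/55/38/35), but the scores, the column names, and the use of SciPy and the pingouin package are our assumptions for illustration, not the authors' actual tooling.

```python
# A minimal sketch of the three-step analysis described above,
# run on synthetic 1-5 Likert scores for a single risk item.
import numpy as np
import pandas as pd
import pingouin as pg          # Welch's ANOVA and Games-Howell post hoc
from scipy import stats

rng = np.random.default_rng(42)
sizes = {"government": 44, "homeowners": 55, "contractors": 38, "designers": 35}
df = pd.DataFrame(
    {"group": g, "score": int(np.clip(round(rng.normal(4.0, 0.8)), 1, 5))}
    for g, n in sizes.items() for _ in range(n)
)

# Within-group summaries: means for ranking, SD/CV as consensus checks
summary = df.groupby("group")["score"].agg(["mean", "std"])
summary["cv"] = summary["std"] / summary["mean"]   # CV < 0.5 -> agreement
print(summary.round(2))

samples = [s.to_numpy() for _, s in df.groupby("group")["score"]]

# Levene's test for homogeneity of variance
_, p_levene = stats.levene(*samples)

if p_levene > 0.05:
    # Equal variances: classic one-way ANOVA; Scheffe's test (e.g. via
    # scikit-posthocs) would serve as the pairwise follow-up.
    _, p_main = stats.f_oneway(*samples)
else:
    # Unequal variances: Welch's ANOVA, then Games-Howell pairwise tests
    p_main = pg.welch_anova(data=df, dv="score", between="group")["p-unc"].iloc[0]
    if p_main < 0.05:
        print(pg.pairwise_gameshowell(data=df, dv="score", between="group"))

print(f"Levene p = {p_levene:.3f}, main effect p = {p_main:.3f}")
```

In practice this procedure is repeated once per risk item (21 times here), with the branch between ANOVA/Scheffe and Welch/Games-Howell decided per item by Levene's result.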
Comparison of Risk Concern within Each Stakeholder Group The degrees of concern about all risks in each stakeholder group are measured by mean scores, and the standard deviation (SD), the coefficient of variation (CV), and the rankings are also summarized (see Table 2). Table 2. Mean scores of the concern of different stakeholder groups about risks. Note: "*" means that the level of stakeholders' concern about this risk is high (with a mean above 4). SD and CV are common measures of data dispersion. A narrow SD and CV indicate that the data are stable and reliable and that respondents in the same group reach a consensus on the level of risk concern. The range of mean ± 1.64 SD is viewed as the consensus criterion for items with a four-point Likert scale [116,117]. A wider range can be used for the consensus evaluation in this study with a five-point Likert scale. Table 2 shows that all the SDs are below 1.46 and that the SDs of almost all the risks with high levels of stakeholders' concern (mean above 4) in each group are below 0.80. Compared to the SD, the CV is a more standardized measure of statistical dispersion and is calculated as the SD divided by the mean. A CV below 0.5 is believed to indicate reasonable and good internal agreement [118,119]. All the coefficients of variation (CVs) listed in Table 2 are below 0.5. In particular, the CVs of almost all the risks with high levels of stakeholders' concern in each group are below 0.2. The government is concerned about all risks, as none of the mean scores is below 3.09. Among all risks, lack of awareness of energy-efficient retrofitting (R3), poor performance in cooperation (R17), and insufficient funds available (R5) are given the highest scores, followed by lack of government departments' coordination and support (R4), opportunistic renegotiation (R18), frequent change in demolition policies (R1), and difficulties in post-retrofit repair (R21), which also score more than 4.05. These risks are attributed to homeowners' poor understanding and cooperation. Like the government, homeowners are concerned about all risks, with scores ranging from 3.24 to 4.38. The scores of moral hazard (R13), difficulties in post-retrofit repair (R21), unqualified building materials (R10), inadequate maintenance (R20), poor safety management (R16), lack of construction skills (R12), and poor quality of old residential buildings themselves (R14) are above 4 and are dominant among all risks. These risks are associated with project quality and safety. Contractors have the most significant concern about opportunistic renegotiation (R18), poor performance in cooperation (R17), and poor safety management (R16). These three risks exist in the phase of site implementation and are associated with homeowners. Designers express more concern about four risks, moral hazard (R13), lack of appropriate technical standards (R9), insufficient information regarding the buildings (R6), and uncertainty about the on-site conditions (R7), with scores of over 4. These risks are relevant to drawing up a retrofitting plan and implementing it. Comparison of Risk Concern among All Stakeholder Groups Levene's test for equality of variances is first conducted, and the test results with significance values (p-values) are shown in Table 3. According to the results, the assumption of homogeneity of variance is only valid for the risk of poor construction management (R15). 
The results of one-way ANOVA can be used to directly compare the concern of all the stakeholder groups about R15, while Welch's test is applied in judging the significance of differences for the other 20 risks (shown in Tables 4 and 5). Stakeholder groups hold different opinions on most of the risks, but there is no significant difference in the concern regarding three risks, namely lack of government departments' coordination and support (R4), lack of technical staff with specific expertise (R8), and poor quality of old residential buildings themselves (R14). As a whole, lack of government departments' coordination and support (R4) and poor quality of old residential buildings themselves (R14) are given more concern by almost all the stakeholder groups, while lack of technical staff with specific expertise (R8) is ranked in the middle and lower tiers by all stakeholders. Every stakeholder group expects to obtain the support of the relevant government departments in order to secure a favourable working environment. Likewise, quality problems attributed to the buildings themselves have severe negative impacts on retrofitting quality, which is a focus of all the groups. By contrast, designers' capacities are not paid much attention. Comparison of Risk Concern within Different Pairs of Groups and the Corresponding Proactive Measures According to the results of the test of homogeneity of variances, Scheffe's test is adopted to make comparisons on R15 between any two stakeholder groups, while the Games-Howell test is used to compare the other risks except R4, R8, and R14. The test results with mean differences and significance values (p-values) are shown in Table 6 (G = Government, H = Homeowners, C = Contractors, D = Designers). There is no particular stakeholder group with significant differences from all the others in terms of risk concern, but the differences between the government and designers and between homeowners and contractors are the most significant among all six pairs of comparisons. Table 7 summarizes the risks of great concern for each stakeholder group (based on Table 2) and also highlights the stakeholder groups who have significantly less concern about each risk than the former group (based on Table 6) (G = Government, H = Homeowners, C = Contractors, D = Designers). Table 7 also shows whether the stakeholder groups with high levels of risk concern have taken measures for proactive risk management or not. Almost all the stakeholder groups tend to proactively mitigate the risks they have more concern about. However, the majority of these proactive risk mitigation measures are considered limited and cannot mitigate these risks well. The details of proactive risk management are shown in Appendix A. Tendency of Risk Perception According to the risks with high concern shown in Table 7, it is easier for the government and industry stakeholders to perceive the risks associated with their own responsibilities due to their professional roles. Correspondingly, these risks are also the focus of their proactive risk management. As the leader and sponsor of retrofit projects, the government is mainly responsible for the organization and decision making of projects. For this reason, the government tends to take a holistic view of these risks and pays more attention to the overall enforceability of retrofit projects instead of the details concerning design and construction. 
As for matters relating to design and construction, the government is more willing to depend on those professionals who keep good cooperative relationships with the government. By contrast, designers and contractors have more concern about the factors affecting the fulfilment of their duties, like the lack of objective information or uncooperative partners. This is in line with the views of Gambatese, et al. [120], who stated that stakeholders' perception is affected by their roles and responsibilities throughout the project process. There is an intragroup consistency and intergroup inconsistency of risk perception due to the differences in interests and roles among stakeholder groups [121]. Note: in Tables 6 and 7, "*" means that the difference in the concern about a risk between two stakeholder groups is significant. Unlike the above three stakeholder groups, homeowners, as the owners and end-users of projects, attach more importance to the retrofit effects, which they consider key to safeguarding their own interests (see Table 7). They focus on the improvement of building quality and appearance and thus have more concern about the risks associated with on-site construction, including whether materials and contractors are qualified, whether contractors conduct themselves lawfully, and whether their safety can be ensured. This is different from the traditional interests of homeowners. Homeowners in the international context generally have more concern about the cost-benefit analysis, to make sure that their costs can be offset by retrofit benefits (including economic and non-economic benefits), due to their role as investors [122][123][124]. By contrast, cost recovery is not the focus of homeowners in China, since they do not need to bear the costs of retrofitting. In their opinion, a decrease in costs does not contribute to an increase in their interests, and they attach more importance to the improvement of living quality. Barriers to Risk Perception Table 2 reflects that industry stakeholders tend to have confidence in their own professional ability, which makes it possible for the relevant risks to be ignored subjectively by these stakeholders. As professional providers of construction services, designers and contractors rarely question their ability to deliver services. They do, however, worry about some external risks, like the lack of objective information or uncooperative partners, which lie outside their familiar design and construction work. The current energy efficiency technologies applied to residential energy retrofit in China are relatively traditional, and there is no significant difference in the design and construction of energy-efficient measures between new-build projects and retrofit projects. This also convinces them that their professional expertise is enough to cope with the tasks in energy retrofit projects. Indeed, familiarity with a task can result in a decrease in risk perception [125]. People's understanding of their actions leads to optimistic views of the relevant risks, and these risks are thus considered to be under control [126]. Such low perception can, in turn, weaken the incentives for the continuous improvement of their professional abilities. 
In the comparisons between the government and both homeowners and designers in Table 7, the government is generally optimistic about designers' competence to make up for the shortage of technical standards (R9) and about homeowners' ability in post-retrofit maintenance (R20), but such optimism is not shared by designers and homeowners. The only technical specification for energy retrofit of residential buildings in the HSCW zone was issued in 2012 but is very difficult to enforce in practice. The local government is more inclined to assign and complete retrofit tasks as soon as possible rather than spend much time improving the technical specification. In the opinion of the local government, retrofit schemes can be entirely dependent on designers' professional knowledge, even if there is a lack of technical guidance for the retrofit design. This was viewed by Wildavsky and Dake [71] as the individualist bias in cultural theory, under which it is believed that the severity of technical risks can be controlled and compensated for by technical institutions. However, designers actually complain that they do not know how to design retrofitting schemes for old residential buildings due to the lack of specifications, so they can only apply some necessary energy-efficient measures from new-build projects to retrofit projects, including the installation of insulation layers on roofs and exterior walls as well as the installation of windows with double glazing. It is also these limited and relatively simple retrofit measures that lead to the optimism of the local government about homeowners' performance in operation and maintenance after retrofitting. Instead, homeowners themselves are not convinced, due to the lack of guidance and assistance. Conflicts of Risk Perception Based on the comparisons between homeowners and contractors in Table 7, it seems hard for them to perceive the risks posed by their own actions, especially those related to opportunism. Both homeowners and contractors have opportunities during on-site construction to adopt opportunistic behaviours. In homeowners' opinion, contractors' breaching of contracts by cutting corners has a direct negative impact on living comfort after retrofitting, but homeowners' poor cooperation and opportunistic requirements, which cause project delay and cost increases, are regarded by homeowners themselves as a reasonable approach to perfecting the retrofit and building a better living environment. Xenidis and Angelides [127] and Loosemore, Raftery, Reilly and Higgon [90] viewed this as a bias resulting from contradictory interests. Similarly, for contractors, the execution of construction work requires cooperation from the homeowners, including the removal of obstacles in the public area, the placement of building materials, the negotiation of home-entry construction, etc. Meanwhile, faced with homeowners' unexpected demands, like opportunistic compensation and unplanned retrofit requirements, contractors need to spend more time and costs on negotiation with homeowners and the adjustment of construction schemes. However, contractors believe that they take government projects more seriously and perform the contract strictly, and they thus tend to neglect the risk arising from their own opportunistic behaviours. Indeed, few people can acknowledge the relationships between their actions and the potential risks [128]. 
In terms of the risks given high levels of concern by homeowners, the comparisons between homeowners and the others in Table 7 show significant differences in the perception of some construction-related risks between homeowners and practitioners. Homeowners cast doubt on contractors' abilities and material quality, and even on the legality of their actions. Excessive concern leads to their suspicion about whether the residential buildings can be renovated as they expect, which, in turn, affects their cooperation with contractors to some extent. In line with the views of Ward and Chapman [22], stakeholders' approaches to risk mitigation arising from their perception of risks are likely to focus only on their own benefits and thus to be detrimental to others. Influenced by risk perception, homeowners are more inclined to strengthen self-protection by making more requests for retrofitting. By contrast, the government does not view these risks as concerns in the way homeowners do. In general, contractors are selected by the government through bidding, and such selection is also built on trust. Indeed, differences in risk perception are related to a lack of confidence in the people producing the risks [129]. Moreover, project staff who feel untrusted are more likely to exhibit moral hazard behaviours [130], which also means that contractors' opportunism originates, to some extent, from homeowners' mistrust. Insights from Risk Perception and TCs Considerations The decrease in homeowners' risk perception plays an important role in promoting homeowners' participation and cooperation, which also contributes to the mitigation of homeowner-associated risks. Information is essential to the adjustment of risk perception. Consumers' risk perception depends on product-related information collected from various sources [131], and risk perception is, in turn, also a direct predictor of information seeking [132]. Information search is one of the primary sources of TCs, which are viewed as affecting make-or-buy decisions [133]. From a TCs perspective, insufficient effective information leads homeowners to bear higher costs of information search, which is detrimental not only to the shaping of low levels of risk perception but also to rational decision making about their involvement in energy retrofitting projects. In the Chinese context, the development of residential energy retrofit relies mostly on the government's propaganda and sponsorship. The local government is the main decision maker regarding the selection of projects, designers, and contractors as well as the scope of retrofit items, although homeowners' approval is still the premise of project initiation and the execution of design schemes. Few homeowners have access to sufficient project information in practice. In particular, the relationships between homeowners and other project parties are new and rather temporary, which means homeowners have no prior knowledge of the others' experience and reliability. To lower homeowners' concern about project risks, the other parties should proactively provide more positive and understandable information about material quality, the expertise of construction staff, safety guarantees, and post-retrofit maintenance. Moreover, the government and contractors should create a more transparent environment for follow-up information on retrofit construction in order to enable homeowners to realize that their home is being improved with the help of the other project parties. 
It is also essential for the other stakeholder groups to enhance their risk perception and to improve the feasibility and effectiveness of proactive risk management measures. In consideration of the tendency of stakeholders' risk perception towards their own responsibilities, all of the stakeholder groups need to share the risks posed by them in order to trigger their awareness of proactively mitigating these risks. For example, industry stakeholders are required to enhance their technical knowledge to ensure the quality of their service, and the government should assist in the development of technical guidance for energy retrofit and the establishment of systematic post-retrofit maintenance. Indeed, risk allocation is viewed as an approach to responsibility-sharing and has a high impact on stakeholders' behavioural motivations [22]. However, risk allocation requires the investment of resources, which is also likely to limit stakeholders' actions for risk mitigation. Economic conditions are considered one of the leading causes of the weak relationship between risk perception and stakeholders' actions [134]. Both uncertainties in the environment of proactive risk management and asset specificity concerning stakeholders' own abilities and resources give rise to higher TCs in the risk management transaction, which further restricts their behaviours in proactive risk management. There is no need for each stakeholder group to be involved in the proactive management of all risks relevant to them. For instance, although homeowners and contractors need to be jointly responsible for on-site construction safety, contractors, with their experience and professional knowledge, can undertake more of the extra work to prevent safety issues at lower searching and monitoring costs. The TCs incurred by proactive risk mitigation (e.g., searching costs, learning costs, negotiation costs, monitoring costs, etc.) should be considered in the risk allocation of energy retrofit projects in order to make sure that the risk mitigation behaviours of risk-takers can be carried out successfully and effectively. Conclusions Energy retrofits of residential buildings in China are exposed to many risks due to the involvement of various stakeholders. Proactive risk management is a more functional approach to project success and can help economize on TCs by controlling uncertainty to smooth the whole process of energy retrofit projects. Stakeholders' proactive actions for risk mitigation are based on their perception of these risks. Perceived risks are different from real risks, and contradictions in risk perception among different stakeholder groups can also result in conflicts in risk mitigation practices. In order to motivate stakeholders' proactive management of real risks, it is essential to have a good understanding of stakeholders' present risk perception. This paper analysed and compared the perception of four main stakeholder groups of 21 risks (identified from a literature review and interviews) in residential energy retrofit projects in the form of risk concern. The proactive measures of different stakeholder groups for risk mitigation were also explored through interviews to validate the relationship between high levels of risk perception and proactive risk management. Responsibilities and interests are the focus of stakeholders' risk perception, and high levels of risk perception can drive people to take proactive measures for risk management. 
The risk perception of government and industry stakeholders generally originates from their sense of duty as the project organizer and service providers, while homeowners tend to view their interests as the basis of risk perception. Correspondingly, all the stakeholder groups are active in the proactive mitigation of these risks. However, influenced by individuals' knowledge and external environmental factors, the effectiveness of some proactive measures is insufficient. Homeowners cannot do much about the risks requiring professional knowledge (e.g., the skills and work normativity of construction staff, the quality of materials and buildings, and building maintenance). Designers have limited roles in the operational normativity of construction staff and in making up for the deficiency of some external information. By contrast, in terms of the risks concerning coordination and support from other groups, the proactive measures of the government are limited. Likewise, contractors do not have sufficiently effective measures to proactively mitigate the risks arising from homeowners' cooperation. It is essential for proactive risk management to enhance the risk perception of risk-source-related stakeholder groups, in consideration of their more effective proactive measures compared to the affected groups. Stakeholders related to risk sources should share the risk, and their increased responsibilities can motivate them to enhance their awareness of proactive risk management. The government and contractors need to take more responsibility for construction quality and maintenance. The government should set more explicit standards for the selection of retrofitting projects, construction materials, and contractors. Meanwhile, it is necessary to clarify contractors' responsibilities with respect to the procurement of materials, personnel abilities, service normativity, and the post-retrofitting quality warranty. Furthermore, the government also needs to shoulder some responsibility for design work, including not only developing more specific design standards but also assisting designers to probe deeper into the buildings and their surroundings. TCs have an important role in both the enhancement of risk perception and responsibility-sharing. Risk allocation with TCs considerations can make responsibility-sharing more reasonable and effective and further drive the achievement of stakeholders' proactive risk management. Homeowners' proactive measures also need to be encouraged, which can be achieved through changes (including both enhancement and reduction) in their risk perception. The key to managing the homeowner-associated risks lies in the enhancement of their self-awareness of active cooperation. Responsibility-sharing (e.g., encouraging homeowners to bear some of the retrofitting costs) contributes to reducing the barriers from homeowners during the construction period. Meanwhile, the decrease in homeowners' perception of the risks caused by other stakeholder groups is also necessary to motivate homeowners' cooperative awareness. Sufficient and effective information should be provided to reduce homeowners' risk perception, which is also an approach to lowering the TCs borne by homeowners and to further improving their initiative of participation and cooperation. Conflicts of Interest: The authors declare no conflict of interest. Appendix A "There is a criterion for the selection of renovation projects. These old residential buildings cannot be demolished in the next five years. 
We consult the departments of urban construction and housing construction about the demolition scope. It is better to make sure that the renovated buildings can continue to be used for over ten years . . . " R3: Lack of awareness of energy efficiency retrofitting Government Yes "During the project set-up, we provided some information about energy retrofit for residents to enable them to have an understanding of retrofit. It is necessary to communicate with residents, which is the responsibility of the neighbourhood committee and the subdistrict office. During the dissemination of information, we also need to focus on those who are reluctant and indecisive . . . " R4: Lack of government departments' coordination and support Government Yes but limited "Actually, we (the Department of Housing and Urban-Rural Development) are mainly responsible for building renovation, but renovation is also related to water, electricity, and gas, which should be handled under the responsibility of other departments. These departments did not actively cooperate with us. In general, if we cannot gain the cooperation of these other departments, we would ask the heads of the municipal government and district for help . . . " "Before the implementation of retrofit, we had a workshop, and all the involved departments were required to attend. We needed to show the construction drawings to these departments to make sure that they knew about the potential impacts of retrofit on water, electricity, and gas . . . " "Prior notice was given on the drawings to highlight the potential obstacles (e.g., cable and gas pipelines on the external walls) during the construction. We also pointed out the possible deviations between design drawings and the on-site practical situations (e.g., hidden pipelines) so that the government could make preparations in advance . . . " R9: Lack of appropriate technical standards Designers Yes but limited "Actually, we also do not know how to design the retrofitting schemes for old residential buildings due to the lack of requirements, so we can only regard these old buildings as newly built buildings . . . " "For residential buildings, no matter whether newly built or old, we used the same energy-saving design software for modeling and calculation. We viewed the old residential buildings as newly built buildings. For example, when designing the insulation of exterior walls, we first need to suppose the original wall surface to be eradicated. Then we redesign the insulation layer and the surface . . . " R10: Unqualified building materials Homeowners Yes but limited "We could only observe the materials and touch the surface to make a judgment. We also did not know if these materials were safe and nontoxic. What we knew depended on what the contractors said. When having doubts about some materials, we still put forward our opinions and suggested the contractors change them." "We also asked what materials were used, and they told us something about the materials. However, we generally still knew little about it, so we could not be too serious about them." R12: Lack of construction skills Homeowners No "We knew little about retrofit construction and these construction workers. We are laypeople, so we could not think about it too much. We could not do anything about it either . . . " R13: Moral hazard Homeowners Yes but limited "During the construction, we could supervise them, and many people crowded around to watch them. 
Sometimes I also talked with them about mortar and concrete mixing. They told us that these materials were tested. If we noticed that they cut corners on retrofit construction, we could call the mayor's hotline to complain. However, it was still hard for us to supervise them because we did not have professional knowledge." "In terms of construction, there is a set of supervision systems, including supervisors, acceptance inspection, and a two-year warranty. We also arranged some people to do field supervision. In general, we went to the construction site every two days, and there was a regular supervision meeting every four days. Homeowners could also feed some problems back to the neighbourhood community, and the neighbourhood community fed these problems back to us. Moreover, the supervision company was selected through a bidding process. Municipal and district departments of quality inspection were also involved in the supervision process. If they found problems, they would punish the relevant construction staff, which could also give these companies a bad record . . . " Designers Yes but limited "During the construction, we went to do on-site supervision to check whether the construction was conducted according to our design requirements, such as the fixation of the insulation walls, the external walls, and the original walls. However, it was also impossible for us to always be there for follow-up supervision . . . " R14: Poor quality of old residential buildings themselves Homeowners Yes but limited "Before the implementation of retrofit, some professionals came here for quality inspection. During this period, we also asked them some questions about building quality. Beyond that, we could do nothing. After all, we were laypeople and could not do anything about it." R16: Poor safety management Homeowners Yes "We paid more attention to our own safety and also reminded others to be careful. For example, when the construction workers were building the scaffold, we reminded the neighbours who were passing by the scaffold to take care." Contractors Yes "We erected some barriers around the construction site to prevent residents from approaching the dangerous areas. We also kept up with garbage collection and always reminded residents to pay attention to their safety. In terms of our own safety, we always supervise the construction workers to make sure they wear safety helmets and fasten safety belts." R17: Poor performance in cooperation Government Yes "During the design process, we did the field survey in order to respect the will of the people. If we did it very well, there would be fewer conflicts during the construction and fewer alterations." "The demolishment of illegal constructions was based on the aerial photos from both 1982 and 1996. These photos could be the evidence enabling us to require homeowners to cooperate on the removal of illegal constructions." "We had some rules and regulations for the prevention of conflicts. For example, we had a mechanism for on-site compromise, and someone was put in charge of certain conflicts. There was a billboard on which homeowners could be informed of the retrofit contents, the parties participating in the retrofit project, the personnel in charge of quality control, and the design schemes." Contractors Yes but limited "We usually informed homeowners of construction conditions to encourage their cooperation. 
We paid more attention to our own attitudes and language when communicating with homeowners in order to avoid unnecessary quarrels. We also asked the subdistrict office and the neighbourhood community for coordination."

R18: Opportunistic renegotiation. Government: Yes, but limited. "In previous projects, we experienced many unnecessary financial losses. For example, before renovating a roof, we had to remove the solar water heaters on it. However, many homeowners used the damage to their heaters as an excuse to ask us for compensation, and we did not know whether these heaters had already been broken before we removed them. As a result, we did not treat roof renovation as a universal retrofit item. If there was a need to renovate a roof, we first required good coordination among the neighbours." Contractors: Yes, but limited. "We tried our best to avoid unnecessary contact with homeowners' personal items. Moreover, before the construction, we needed to check whether homeowners' solar water heaters had already been broken."

R20: Inadequate maintenance. Homeowners: Yes, but limited. "We generally supervise each other and cherish and protect our own homes."

R21: Difficulties in post-retrofit repair. Government: No. "The government promotes building energy efficiency, so there are insulating layers on the exterior walls of the buildings constructed in recent years. However, the insulating layers peeled off some residential buildings only a few years after they were built. At present, the technology of external thermal insulation walls is not mature in our country . . . " Homeowners: No. "We are not professionals and know little about it. We cannot do anything other than rely on the government and the construction workers."
12,639.2
2020-04-02T00:00:00.000
[ "Environmental Science", "Engineering", "Business" ]
Dual-Polarized Multi-Beam Fixed-Frequency Beam Scanning Leaky-Wave Antenna A fixed-frequency beam-scanning leaky-wave antenna (LWA) array with three switchable dual-polarized beams is proposed and experimentally demonstrated. The proposed LWA array consists of three groups of spoof surface plasmon polariton (SPPs) LWAs with different modulation period lengths and a control circuit. Each group of SPPs LWAs can independently control the beam steering at a fixed frequency by loading varactor diodes. The proposed antenna can be configured in both multi-beam mode and single-beam mode, where the multi-beam mode offers a choice of two or three dual-polarized beams. The beam width can be flexibly adjusted from narrow to wide by switching between the multi-beam and single-beam states. A prototype of the proposed LWA array was fabricated and measured, and both simulation and experimental results show that the antenna accomplishes fixed-frequency beam scanning over an operating band of 3.3 to 3.8 GHz, with a maximum scanning range of about 35° in multi-beam mode and about 55° in single-beam mode. It is a promising candidate for application in the space–air–ground integrated network scenario in satellite communication and future 6G communication systems.

Introduction

The space-air-ground integrated network (SAGIN) is a crucial component of the 6G network, which is widely recognized as the future of wireless communication systems [1]. Leaky-wave antennas (LWAs) offer excellent beam scanning capabilities, along with the advantages of being low-profile, cost-effective, and low in power consumption. These characteristics make LWAs an attractive option for use as relay terminal antennas in SAGIN applications, facilitating communication between users and satellites. Generally speaking, LWAs are classified into two categories depending on their operating form: uniform and periodic. Uniform LWAs are limited to forward beam scanning [2-4]. Periodic LWAs can perform backward-to-forward beam scanning [5,6]. However, periodic LWAs suffer from the open-stop-band (OSB) issue, in which the gain decreases at broadside radiation. Some methods to suppress the open stop band have been proposed, including loading matched stubs along the radiation direction and designing LWAs based on the balance condition of the composite right/left-handed transmission line (CRLH-TL) [7-10]. To fully exploit spectrum resources, beam scanning at a fixed frequency has been achieved with LWAs in multiple ways [11-15]. In [11], a beam-scanning LWA that can operate from 1° to 23° at 9.8 GHz was designed using a binary programmable metasurface, where a PIN diode works as a binary switch. In [12], Wang et al. proposed a sinusoidal impedance modulation-based spoof surface plasmon polariton (SPPs) LWA loaded with varactor diodes, where a maximum scanning angle of about 45° at a fixed frequency from 5.5 to 5.8 GHz was achieved. A triangular impedance modulation-based SPPs LWA was proposed to achieve beam scanning at fixed frequencies in the dual bands of 4 to 4.5 GHz and 5.75 to 7.25 GHz [13]. In [14], a CRLH-based LWA was proposed whose beam scan range can reach 69° at 5 GHz. In [15], a CRLH-based LWA using ON/OFF control of PIN diodes for surface impedance variation was proposed, whose beam scan angle can reach 50° at 2.45 GHz. Nevertheless, previously reported LWAs operate in a single polarization.
Recently, we proposed a dual-polarized fixed-frequency beam scanning LWA, but it can only form a single beam, and its scanning angle is limited [16]. In this letter, to implement dual-polarized LWAs with multi-beam capabilities for a practical mobile communication system, we propose a dual-polarized multi-beam LWA, as shown in Figure 1. The proposed LWA array consists of three groups of SPPs LWAs and a DC control circuit; each group of LWAs is loaded with varactor diodes and has a different modulation period length. Thanks to the periodic modulation of the surface impedance, guided waves can be effectively transformed into leaky-wave radiation that possesses frequency-scanning properties. Additionally, the surface impedance of the LWA can be reconfigured by adjusting the capacitance of the varactor diodes through a DC bias voltage, resulting in radiation beam steering across a broad angle range at a fixed frequency. Through separate and simultaneous feeding of the antenna ports, the capability to switch between narrow multi-beam and wide single-beam states is achieved. Both simulation and experimental results indicate that the antenna is capable of dual-polarized fixed-frequency beam scanning at frequencies ranging from 3.3 to 3.8 GHz, with scanning ranges of approximately 35° and 55° for the multi-beam and single-beam operating states, respectively. The peak gain of the proposed antenna is 12.1 dBi, and the radiation efficiency is more than 60%. The antenna can be applied in SAGIN systems to achieve beam coverage over a large area, improving spectrum utilization and providing higher effective isotropic radiated power (EIRP) values for the service area.
Figure 1 shows the proposed dual-polarized multi-beam fixed-frequency beam scanning LWA, which is composed of six SPPs transmission lines with impedance modulation and a DC control circuit (the numbers 1-12 in Figure 1 represent the port numbers used for feeding the antenna array). The DC control circuit of the varactor diodes is placed on the right side of the antenna array. Each parallel-routed DC bias line is loaded with an inductor to separate the DC path from the RF path and avoid mutual interference. The end of each bias line is connected to the ground plane through metal vias. The entire LWA array contains three groups of dual-polarized antennas. The first group of dual-polarized antennas contains six modulation units per modulation period, the second group contains eight modulation units, and the third group contains ten modulation units. The antennas are labeled 1 to 6 from top to bottom. Antennas 1 and 5 employ the varactor diode SMV2202-040LF, with a capacitance range of 0.31 pF to 3.14 pF. Antenna 3 utilizes the varactor diode SMV2203-040LF, with a capacitance range of 0.44 pF to 4.71 pF. Their bias voltage varies from 0 to 20 V. Antennas 2, 4, and 6 utilize the varactor diode MAVR-000120-1141, with a capacitance range of 0.14 pF to 1.1 pF within a voltage variation range of 0 to 12 V. Ports 1, 3, 5, 7, 9, and 11 are feeding ports, and ports 2, 4, 6, 8, 10, and 12 are connected to matched loads to absorb the remaining electromagnetic energy. The LWA array is arranged in an X-Y-X-Y-X-Y polarization order to maximize space utilization and reduce crosstalk between antennas. The antenna is designed on an F4B substrate with a thickness of 3 mm, ε_r = 3.5, and tanδ = 0.001.

Methods

The dimensions of each part are indicated in Table 1. Assuming that the leaky wave propagates along the transmission line (the x direction), the surface impedance Z_s can be described as

Z_s(x) = jX_s[1 + M cos(2πx/p)],  (1)

where X_s represents the average reactance of the surface, M represents the modulation factor, and p represents the modulation period. In this design, the SPPs transmission line structure is used as the modulation unit, and the surface impedance is modulated by loading varactor diodes and varying the notch depth to obtain beam steering at a fixed frequency. The radiation schematic of the periodic LWA is shown in Figure 2. Unlike a resonant antenna, an LWA is a kind of traveling-wave antenna: the electromagnetic energy continuously leaks into free space, and the unradiated energy is absorbed by the matched load. It has been demonstrated that sinusoidal impedance surface modulation can efficiently convert non-radiative traveling waves into radiative leaky waves [17]. When p is determined, M is also determined. The fundamental mode of the SPPs antenna is a slow-wave mode.
For −1st-harmonic radiation, the main beam angle can be approximately calculated as

sin θ ≈ sqrt(1 + X̄²) − 2π/(k_0 p),  (2)

in which X̄ = X_s/η_0 is the normalized average surface reactance, η_0 is the free-space wave impedance, and k_0 is the free-space wave number. By changing p, the LWA can obtain beams with different radiation angles. The wave number k_x in the direction of propagation along the SPPs transmission line can be expressed as

k_x = k_0 sqrt(1 + X̄²),  (3)

and, similarly, the leaky-wave antenna radiation angle θ can be described as

θ = arcsin((k_x − 2π/p)/k_0).  (4)

To implement dual polarization, two different impedance surfaces are built on a single-sided comb structure. One is an impedance surface with different notch depths, and the other is an impedance surface with the same notch depth but loaded with devices of different capacitance values. It is noteworthy that the electric field directions excited by the two aforementioned impedance surfaces are perpendicular to each other, which is the fundamental idea behind our design of a dual-polarized fixed-frequency beam scanning LWA. The polarization characteristics are verified by observing the electric field distribution and the cross-polarization patterns. As depicted in Figure 3, the electric field of the LWA with different notch depths varies linearly along the x direction, perpendicular to the notch direction, while the electric field of the LWA with the same notch depth varies linearly along the y direction, perpendicular to the capacitance slot direction. Furthermore, slip-symmetric branching has been utilized in the antenna array to enable flexible beam modulation and weaken the OSB effect that is typically present in periodic leaky-wave antennas.
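To make the scanning behaviour concrete, the following minimal Python sketch evaluates the modulated surface reactance of Equation (1) and the −1st-harmonic beam angle of Equation (2) as reconstructed above. All parameter values (average reactance, modulation factor, period lengths) are illustrative assumptions, not the dimensions of the fabricated array; the sketch only shows how tuning X̄ (via the varactor bias) steers the beam at a fixed frequency, and how different modulation periods p yield different beam directions, which is the multi-beam principle used later.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light (m/s)

def surface_impedance(x, Xs, M, p):
    """Sinusoidally modulated reactance surface, Eq. (1):
    Zs(x) = j*Xs*(1 + M*cos(2*pi*x/p))."""
    return 1j * Xs * (1 + M * np.cos(2 * np.pi * x / p))

def beam_angle_deg(x_bar, p, f):
    """-1st space-harmonic beam angle, Eq. (2):
    sin(theta) = sqrt(1 + x_bar^2) - 2*pi/(k0*p)."""
    k0 = 2 * np.pi * f / C0
    s = np.sqrt(1.0 + x_bar**2) - 2 * np.pi / (k0 * p)
    return float(np.degrees(np.arcsin(s))) if abs(s) <= 1 else float("nan")

f = 3.5e9   # fixed operating frequency within the 3.3-3.8 GHz band
p = 0.060   # modulation period in metres (hypothetical value)

# Reactance profile over two periods (illustrative parameters).
x = np.linspace(0, 2 * p, 5)
print(np.imag(surface_impedance(x, Xs=200.0, M=0.3, p=p)).round(1))

# Fixed-frequency scanning: sweep the normalized reactance x_bar,
# which the varactor bias voltage would tune.
for x_bar in (0.6, 0.8, 1.0, 1.2):
    print(f"x_bar = {x_bar:.1f} -> theta = {beam_angle_deg(x_bar, p, f):+.1f} deg")

# Multi-beam principle: groups with different periods p radiate towards
# different angles for the same x_bar.
for p_k in (0.050, 0.060, 0.070):
    print(f"p = {p_k*1e3:.0f} mm -> theta = {beam_angle_deg(1.0, p_k, f):+.1f} deg")
```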
To implement the multibeam operation, the principle is based on Equation (1), which contains three modulation parameters: M, p, and Z_s. The period length p and the modulation factor M are changed by varying the number of cells contained in each modulation period. By tuning the varactors, different Z_s values can be obtained at a fixed frequency. Taking the first group of antennas as an example, the X-polarized SPPs antenna has six subwavelength cells in one modulation period. The periodic modulation caused by the different depths of the slots causes the radiation of higher-order harmonics. The Y-polarized SPPs antenna also has six subwavelength cells in one modulation period, and its radiation principle is that the periodic alternate loading of varactor diodes and fixed capacitors changes the surface impedance, thus causing the radiation of higher-order harmonics.

The tool used for antenna simulation is CST MICROWAVE STUDIO, in which the eigenmode solver is used for the dispersion curve simulations, and the time-domain solver is used for the S-parameter, gain, and radiation pattern simulations. For the unit cell of the SPPs LWA, the most important property is its dispersion characteristics. The dispersion curves of the X-polarized antenna unit and the Y-polarized antenna unit are shown in Figure 4a,b, respectively; it can be seen that when the slot depth of the X-polarized antenna unit increases, the cut-off frequency of the unit decreases. The slot depth of the Y-polarized antenna unit remains unchanged, and its dispersion curve varies with the loading capacitance. According to the dispersion curve of the antenna unit, the antenna can be designed to operate at a particular frequency. To complete the beam scanning at a fixed frequency, the wave number k_x in the propagation direction can be tuned by impedance modulation. The dispersion curves for the modulation periods of the antenna are presented in Figure 4c,d: those of the X-polarized antenna are shown in Figure 4c, while the modulation period of the Y-polarized antenna can be explained by the triangular impedance modulation evolved from the sinusoidal impedance modulation, and its dispersion curves are presented in Figure 4d. From these results, it can be seen that beam scanning at a fixed frequency can be implemented by changing the capacitance of the loaded varactor diodes.
We have simulated the S-parameters and far-field radiation patterns of the LWA. The simulated reflection coefficients of the X-polarized and Y-polarized antennas are presented in Figure 5. It can be observed that the reflection coefficients of all six antennas in the operating frequency band of 3.3 to 3.8 GHz are consistently below −10 dB. The simulated radiation patterns of the multi-beam antenna at 3.5 GHz are presented in Figure 6, and the radiation angles of each group of X-polarized and Y-polarized antennas remain identical. Figure 6a,b show the 3-D radiation patterns of the antenna. Figure 6c,d depict the radiation patterns of the X-polarized antennas with small and large capacitances, respectively, where the large-capacitance state corresponds to a varactor capacitance of 3.1 pF and the small-capacitance state to 0.5 pF. It is observed that the X-polarized antennas have a gain range of 9 to 11.2 dBi and a 3 dB beamwidth range of 8° to 11° with small capacitance, and a gain range of 11.3 to 13 dBi and a 3 dB beamwidth range of 8° to 10.5° with large capacitance. The maximum gain change during scanning is 2.3 dB. Similarly, Figure 6e,f illustrate the radiation patterns of the Y-polarized antennas with small and large capacitances, respectively, where the large-capacitance state corresponds to a varactor capacitance of 1.1 pF and the small-capacitance state to 0.2 pF. The gain range of the Y-polarized antennas under small capacitance is 11.2 to 11.6 dBi, and the 3 dB beamwidth range is 11° to 14°. Meanwhile, the gain range of the Y-polarized antennas under large capacitance is 10.3 to 12.4 dBi, and the 3 dB beamwidth range is 13° to 18°. The maximum gain change during scanning is 1.3 dB.
Results and Discussion

To verify the design, the antenna system prototype shown in Figure 7 was fabricated for measurement. The prototype comprises three main components: a beam control board, a USB2ANY module, and the proposed LWA array. The host computer connects to the beam control board via the USB2ANY module, while the six voltage output channels of the board are connected to the metal vias of the DC bias lines of the LWA array through Dupont wires. Using specialized control software, the beam control board can be commanded to adjust the voltage loaded across the varactor diodes, thereby enabling flexible beam regulation. The measured results are presented in Figure 8, and it can be seen that the S-parameters perform well in the range of 3.3 to 3.8 GHz.
Specifically, the reflection coefficients of the X-polarized and Y-polarized antennas are below −10 dB in the operating frequency band. The beam sweep performance of the antenna at a fixed frequency is indicated in Figure 9. The LWA can provide three dual-polarized beams. Antennas 1 and 2 form one set of dual-polarized fixed-frequency scanning antennas, with a common scanning area of −16° to 19° at 3.5 GHz. Antennas 3 and 4 have a common scan area of −6° to 29° at 3.5 GHz. From the measured radiation patterns, although the maximum beam scan range of antenna 1 in the multi-beam state can cover −28° to 19°, considering the overall dual-polarized performance, the common beam scan range of each group of dual-polarized antennas is about 35°. Taking antenna 3 as an example to analyze the test results, when the bias voltage varies from 0 V to 12.6 V, the capacitance of the varactor diode changes from 4.7 pF to 0.6 pF, and the antenna scans from 29° to −4°. The side-lobe level of the antenna is more than 10 dB below the main beam level during the entire scan.
The differences between the measurement results and the simulation results may be due to the different parasitic parameters generated by the varactor diodes when high and low voltages are applied, which cannot be accurately captured by the simulation software.

Next, we investigate the single-beam mode and the multi-beam mode. The measured radiation patterns when the different feeding ports of the antenna array are excited are displayed in Figure 10. The antenna array can produce multiple directional radiation beams when fed individually from different ports, which is the multi-beam operation. The X-polarized antenna gain ranges from 9.7 to 9.9 dBi with a 3 dB beam width of about 9°, while the Y-polarized antenna gain ranges from 11.3 to 11.7 dBi with a 3 dB beam width of about 11°. When multiple ports are fed simultaneously, the antenna array generates a wider beam, which is the single-beam working state. The X-polarized antennas have a beam width of about 31°, and the Y-polarized antennas have a beam width of about 32° in the single-beam working state. The scanning range of the single beam can be flexibly adjusted by varying the voltage on the varactor diodes, allowing for a total scanning range of approximately 55°, from −27° to 28°.

Because the parasitic resistance of the varactor diode has a large impact on the radiation efficiency of the antenna, we take the first group of dual-polarized antennas as an example and vary the parasitic resistance value to observe the corresponding change in radiation efficiency; the results are displayed in Figure 11. Without considering the effect of the device parasitic resistance, the radiation efficiency of the dual-polarized antenna is higher than 75%. The parasitic resistance of the device causes the radiation efficiency of the active antenna to decrease. The Y-polarized antennas are less affected
by the parasitic resistance because they are loaded with fewer varactor diodes, so even when the parasitic resistance of the device is 2 Ω, a radiation efficiency of more than 56% can still be achieved in the operating frequency band. However, the X-polarized antennas suffer from a higher ohmic loss in the simulation, because they use a larger number of active devices. The measurement results show that the actual radiation efficiency of the dual-polarized antenna exceeds 60% in the frequency range of 3.3 to 3.8 GHz.

Table 2 provides a comparison of performance between the proposed dual-polarized multi-beam fixed-frequency beam scanning LWA and previously reported LWAs. In Table 2, λ_0 is the wavelength at the center frequency of the working band of the LWAs. Although these previous works have achieved fixed-frequency beam scanning, most of them are single-polarized and can only form a single beam. The proposed antenna is dual-polarized and can form multiple beams, while the beam width can be flexibly adjusted by switching between the multi-beam and single-beam states. The proposed antenna also performs well in terms of gain and beam scanning range.

Conclusions

An SPPs LWA array with dual-polarized, switchable, and steerable multi-beams at a fixed frequency is proposed. In the frequency range of 3.3-3.8 GHz, this antenna can modify the voltage loaded on the varactor diodes to achieve a maximum scanning range of 35° for the multi-beam mode and 55° for the single-beam mode. Additionally, the beam width is adjustable by switching between the multi-beam and single-beam modes. The peak gain of this antenna is 12.1 dBi, while the radiation efficiency exceeds 60%. Given these capabilities, this antenna holds great potential for use as a relay antenna in future satellite communication within the SAGIN. The antenna's measured results exhibit excellent agreement with the simulated results, validating its performance.
7,688.8
2023-05-25T00:00:00.000
[ "Physics" ]
Routing Protocol in VANETs Equipped with Directional Antennas: Topology-Based Neighbor Discovery and Routing Analysis In Vehicular Ad Hoc Networks (VANETs), directional antennas are a good solution when a longer transmission distance is needed. When vehicles are equipped with directional antennas, however, complete paths from the sources to the destinations may not exist. The epidemic routing protocol is considered one of the best-performing routing protocols when the network is intermittently connected, but it imposes a heavy load on the network and great energy consumption on the nodes. In this paper, we first propose a novel neighbor discovery algorithm which enables nodes to sense the topology changes around them and arrange their directional antennas accordingly. Secondly, we propose a routing protocol based on the conventional epidemic routing protocol, in which nodes make their routing decisions according to the information collected during the neighbor discovery process. Experimental results show that the proposed neighbor discovery algorithm performs better, especially in scenarios where the node density is low. Moreover, the matching routing protocol can effectively reduce the load on the network and successfully deliver packets to their destinations within a reasonably short delay.

Introduction

Directional antennas have been widely used in wireless ad hoc networks recently. They can produce higher gain, provide greater transmission range, and improve network spatial reuse and throughput as well. Furthermore, their directional selectivity reduces the co-channel interference of neighboring nodes, and directional antennas bring potential performance improvements to mobile ad hoc networks. Despite these improvements, their deployment also brings severe challenges to the nodes, such as neighbor discovery, and the network suffers from frequent disruptions, making the topology unstable; that is, there are no consistent paths between pairs of nodes.

In a considerable proportion of real scenarios, the mobility of vehicles is not purely random. Vehicles show strong location and group preference. For example, vehicles at earthquake relief sites work in the areas that are under serious destruction, and these areas are not uniformly distributed. Moreover, vehicles can change their locations from one area to another. Based on this kind of scenario, we formulate the neighbor discovery algorithm and the matching routing protocol in this paper. The main contribution of this paper covers two aspects. Firstly, we design a novel neighbor discovery algorithm for intermittent networks equipped with directional antennas. In the proposed neighbor discovery algorithm, nodes monitor the HELLO message receiving frequency of each sector to estimate topology variations and adjust the steering time of each sector according to the history records maintained in their neighbor node lists. Secondly, we propose a routing protocol derived from the proposed neighbor discovery algorithm. In the routing protocol, the source node makes its routing decisions according to the status of the nodes within two hops. The stability of the neighbor relationships, the communicating probability, and the time schedules of the directional antennas of all participating nodes are taken into consideration when the source node makes its routing decisions.
The rest of the paper is organized as follows. In Section 2, we introduce related research on neighbor discovery algorithms and routing protocols for opportunistic networks. In Section 3, we present the system models. In Section 4, we introduce and analyze our proposed neighbor discovery algorithm. Our proposed routing protocol is demonstrated in Section 5. Section 6 provides simulation results. Finally, we conclude the paper in Section 7.

Related Work

2.1. Neighbor Discovery. As we discussed in Section 1, neighbor discovery is the key initial step for establishing connections among nodes in intermittent networks. Among recent research on neighbor discovery protocols with purely directional antennas, we categorize the protocols into two classes: deterministic and random. In deterministic protocols [1-3], nodes steer their directional antennas in preset sequences. Deterministic protocols can guarantee a bounded neighbor discovery delay. However, most of them need time synchronization, which is not practical in some applications. In random protocols [4-10], nodes randomly select a direction in which to transmit or receive HELLO messages. Compared to deterministic protocols, the most obvious advantage of probabilistic protocols is that they do not need to be synchronized, so they are more robust and adaptive to complex scenarios. However, most probabilistic or random protocols do not consider the topology change of the network. Reference [1] focuses on oblivious neighbor discovery problems and proposes an oblivious discovery protocol which achieves guaranteed discovery with order-minimal worst-case discovery delay. However, topology change is not its concern. References [2-10] focus on MAC protocols to achieve better neighbor discovery performance. References [2-8] all need time synchronization to ensure their performance, and their directional antennas simply choose a direction when they run the neighbor discovery process. References [11, 12] take the topology into consideration. Reference [11] works on topology control problems: nodes track only a subset of their discovered neighbor nodes, and a scheme called Di-ATC is proposed which tries to minimize the variance of the angular separation between the tracked neighbors of a node. In [12], nodes discover their neighbor nodes in one sector with increasing power so as to identify the closest nodes in the sector. However, [11] assumes that the network is connected at all times and that the path stretch after topology control is low, and [12] is only suitable for a static ad hoc network, which is not realistic in VANETs equipped with directional antennas.

2.2. Routing Protocols.
Since there are no complete paths in intermittent vehicular ad hoc networks with directional antennas, traditional end-to-end routing protocols for mobile ad hoc networks (e.g., DSR, AODV) are no longer suitable. The existing routing protocols concerned with intermittent networks mostly focus on easing the load while guaranteeing successful transmission. In [13, 14], nodes generate only one copy of each data packet to reduce the load on the network. In the methods of [15-18], nodes selectively send copies of packets to their neighbor nodes. In these replication-based methods, nodes carefully make their routing decisions according to several rules, such as utility-based routing [15] or probability-based routing [17]. They assume that the movement patterns of nodes are not purely random and that future contacts depend on past information. Reference [16] proposes a method called Spray and Focus, in which nodes first spray a fixed number of copies of their data packets to selected relay nodes, and then nodes holding only one copy of a packet can only forward it further using a single-copy utility-based scheme. In [18], the authors investigate the TTL on packet copies in order to reduce the load on the network. Due to the dynamic topology of VANETs, the size of HELLO packets and the mobility should be taken into consideration in the routing decisions. Reference [19] investigates the MAC-layer contention issue and proposes BBNC, a backbone-based routing protocol with inter-flow network coding; the position information of each backbone node is critical when nodes in [19] make their routing decisions. Reference [20] focuses on the effect of packet size on the performance of routing protocols; it uses a fuzzy logic-based algorithm to select relay nodes and a Q-learning-based approach to tune the fuzzy membership functions. For VANETs where nodes are equipped with directional antennas, the dynamic topology is mostly caused by the scan pattern of the directional antennas, and this is the emphasis of this paper. Routing protocols focusing on directional antennas, such as [21], make their routing decisions based on information about the geographical positions of the participating nodes, which requires the use of GPS. Reference [22] focuses on analyzing the relationship between the throughput and the beam width of the directional antenna. In general, these works rarely consider the switching pattern of the directional antenna.

System Model

3.1.1. Directional Antenna Model. We assume that the set of nodes in the network is denoted by Ň. Each node i ∈ Ň is equipped with only one set of directional antennas. For node i, we approximate the directional antenna as a circular sector of angle θ (0 < θ ≤ 2π), so that there are K ≜ 2π/θ sectors around it, indexed clockwise from 0 to K − 1. When θ = 2π, the directional antenna degenerates to an omnidirectional antenna. The set of sectors of node i is denoted by Ŝ_i; each sector s ∈ Ŝ_i maintains a timer t_s. When t_s expires, the rotation mode of the directional antenna is decided by the status of the node:

(i) Pure neighbor discovery mode (PND-M): if there is no transmission task for the node, it switches its directional antenna randomly to another sector.

(ii) Data transmission mode (DT-M): if there are data packets stored in the node and waiting to be transmitted, the node begins to switch its directional antenna clockwise. In this mode, the scan pattern of each node is ordered and predictable, so the routing decisions are more precise.
The initial t_s is equal for every sector, and the rotation period is a constant, T, where T = Σ_{s∈Ŝ_i} t_s. The coverage range of each node is R. A code sketch of this sector model is given after the problem formulation below.

Communication Model. We assume that the time unit is τ, which refers to the maximal time that a HELLO message needs to be broadcast to a neighbor successfully. As shown in Figure 1, when node i points its directional antenna to sector s, it broadcasts a HELLO message and then enters the receiving state until it switches its directional antenna to the next sector. When node i receives a HELLO message from node j, node j becomes a discovered neighbor node of node i. Only when two nodes point their directional antennas in opposite directions and are located within the transmission range of each other can the HELLO message be received successfully. Moreover, when a node receives a HELLO message from another node, it sends back a response message conditionally. This mechanism is detailed in Section 4.

Mobility Model. In this paper, groups of nodes are scattered randomly in a square area at first, and a community is formed around the central position of each group. Then, nodes begin to walk randomly inside their own community. We assume that a node can walk beyond its current community to another one with a very small probability, P. The scenario is shown in Figure 2.

Problem Formulation. As shown in Figure 2, the relative positions among the vehicle nodes are stable since they show strong location and group preference. For example, for node 0, a great number of nodes walk on its north side and its south side, in which case it is unnecessary for node 0 to point its directional antenna to the east and west sides for a long time if it wants to discover more neighbor nodes in a short time. Moreover, as we mentioned, nodes can change their communities, so the scan pattern of node 0 can sometimes become outdated, in which case node 0 needs to adjust the scan pattern of its directional antenna to the updated topology around itself.

In order to successfully deliver packets in an intermittent network, the source node has to send plenty of data packet copies to the neighbors that it meets. However, this causes a heavy load on the network. On the other hand, if the source node simply chooses one of its neighbor nodes as the relay node, successful delivery to the destination cannot be guaranteed, leading to long delays. Therefore, the routing decision determines the transmission delay as well as the load and the energy consumption of the network. For example, for node 8 in Figure 2, nodes 1, 3, 6, and 7 are the neighbor nodes maintained in its neighbor node list. When node 8 wants to send data packets to node 2, it needs to determine which neighbor nodes should be chosen to relay these packets. The answers to three questions about the neighbor nodes are critical when node 8 makes its routing decisions. For a specific possible relay node: (1) What is the probability that node 8 can communicate with it? (2) If node 8 sends its data packets to this relay node, what is the probability that it can relay these data packets to node 2 in one hop? (3) What is the expected delay for it to successfully transmit these data packets to node 2?

Here, we can formulate the research objective of this paper: to make each vehicle able to adjust the scanning pattern of its directional antenna according to the topology around it, and to find the optimal routing decisions that minimize the delay and the number of packet copies while guaranteeing successful packet delivery with a short delay.
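As a concrete illustration of the sector model just described, the sketch below implements the two rotation modes in Python. The class and method names are ours, not the paper's, and the parameter values are arbitrary; it is a minimal sketch, assuming equal initial steering times that sum to the rotation period T.

```python
import random
from enum import Enum

class Mode(Enum):
    PND_M = "pure neighbor discovery"   # no pending traffic
    DT_M = "data transmission"          # packets waiting to be sent

class DirectionalAntenna:
    """Minimal sketch of the sector model: K = 2*pi/theta sectors,
    each with its own steering timer t_s; the timers sum to T."""

    def __init__(self, num_sectors: int, rotation_period: float):
        self.K = num_sectors
        self.T = rotation_period
        # equal initial steering time for every sector
        self.t = [rotation_period / num_sectors] * num_sectors
        self.current = 0

    def next_sector(self, mode: Mode) -> int:
        if mode is Mode.PND_M:
            # PND-M: hop to a random sector
            self.current = random.randrange(self.K)
        else:
            # DT-M: rotate clockwise so the schedule is predictable
            self.current = (self.current + 1) % self.K
        return self.current

ant = DirectionalAntenna(num_sectors=4, rotation_period=4.0)
print(ant.next_sector(Mode.PND_M), ant.next_sector(Mode.DT_M))
```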
Neighbor Discovery

In this section, we devise a novel neighbor discovery algorithm with directional antennas which makes nodes respond quickly to topology changes. No a priori knowledge is necessary when the nodes begin to detect their neighbor nodes. To clearly demonstrate the protocol we devised, we assume that node j is within the coverage range of node i, and that node k is within the coverage range of node j while outside the coverage range of node i.

Neighbor Node List. The neighbor node list is the fundamental part of our approach; the sector distribution and the routing decisions are both based upon the information in the neighbor node list. In our approach, nodes arrange their sectors according to the information in the neighbor node list. The neighbor node list of each node is built up and updated from the interactive messages that the node receives. The structure of an interactive message is shown in Figure 3:

(1) ID: the ID of node i;
(2) Type: indicates whether the message is a HELLO message or a response message;
(3) Sector S: records the sector that the antenna of node i points to when node i broadcasts this message; it is denoted by s_i (s_i ∈ Ŝ_i);
(4) TNS (time to next sector): the remaining value of the timer t_s when the node broadcasts this message.

The Neighbor List records all the neighbor nodes that node i has ever detected:

(1) Neighbor ID: the ID of the neighbor node, such as the ID of node j;
(2) DES N (duration for every sector of the neighbor node): records all the sectors in which node i has ever detected node j, together with the total number of messages received from node j;
(3) Sector N (sector of the neighbor node at the last communication): records the sector that the directional antenna of node i pointed to when node i last detected node j.
Every node maintains a neighbor node list. The format of each item in the neighbor node list is shown in Figure 4. The Neighbor-Information Area of node i records the information about node j. The Two-Hop Node List records all the information about the neighbor nodes that node j has ever detected, such as the information about node k. Their formats are the same as in the HELLO message, and their descriptions are similar as well, except that the Time Flag indicates the update time of the item, which is maintained based on the local clock of node i since nodes are not time synchronized.

In order to improve the efficiency of neighbor discovery, nodes send back a response message when they receive a HELLO message, so that they take every chance to make themselves discovered by others. However, the collision issue is not considered in this paper, since nodes are not time synchronized, as in [2-8]. We propose a mechanism to reduce the collision probability: nodes are not forced to send back a response message every time they receive a HELLO message. When a node, say j, receives a HELLO message from another node, say i, two steps are taken: (1) Node j updates its neighbor node list according to the HELLO message from node i. The Neighbor-Information Area is updated according to the Self-Information Area of the HELLO message, and the Two-Hop Node List is updated according to the Neighbor List of the HELLO message. (2) If node i is listed in the neighbor node list of node j and the information node i carries about node j is the latest, node j does not respond to node i. If not, node j sends back a response packet to node i after a random delay, whether it is in the transmitting state or not. This random delay is supposed to be smaller than the TNS in the HELLO message from node i, in case node i turns its directional antenna to the next sector and cannot receive the response message from node j.

As HELLO messages and response messages are broadcast in the network, nodes are able to collect plenty of information with which to arrange their directional antennas.

Sector Steering Schedule Arrangement. In this part, we present how nodes adjust the steering time of their sectors based upon the neighbor node list. As we mentioned in Section 3, when the mobility model of the nodes in the network shows strong location and group preference, nodes can waste time on sectors where few neighbor nodes exist if they switch their directional antennas randomly. To make the nodes sensitive to the topology around them and arrange their directional antennas intelligently, we propose an algorithm called History-Based Sector Distribution (HSD). At the beginning of HSD, node i switches its directional antenna to sectors randomly, and the timer t_s (s ∈ Ŝ_i) is equal to T_i/2. As the neighbor discovery proceeds, node i gathers enough information to arrange its directional antennas, and t_s (s ∈ Ŝ_i) is adjusted accordingly. According to the DES N entries in the neighbor node list maintained by node i, we can know which sectors communicate with other nodes frequently. These are the areas where groups of nodes may exist and where node i is more likely to discover new neighbor nodes.
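The message and list structures described around Figures 3 and 4 can be summarised as plain data containers. The field names below follow the descriptions above; the concrete types and defaults are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NeighborRecord:
    """One entry of the Neighbor List carried in an interactive message."""
    neighbor_id: int
    des_n: Dict[int, int] = field(default_factory=dict)  # sector -> messages received there
    sector_n: int = -1          # our sector at the last contact with this neighbor

@dataclass
class InteractiveMessage:
    """HELLO / response message, following the fields of Figure 3."""
    node_id: int
    msg_type: str               # "HELLO" or "RESPONSE"
    sector_s: int               # sender's current sector
    tns: float                  # time remaining until the sender switches sector
    neighbor_list: List[NeighborRecord] = field(default_factory=list)

@dataclass
class NeighborTableItem:
    """One item of the neighbor node list (Figure 4)."""
    info: NeighborRecord                                          # Neighbor-Information Area
    two_hop: List[NeighborRecord] = field(default_factory=list)   # Two-Hop Node List
    time_flag: float = 0.0                                        # local update time (no global clock)

msg = InteractiveMessage(node_id=1, msg_type="HELLO", sector_s=2, tns=0.8)
print(msg)
```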
Then, we can formulate HSD as follows. Node i allocates the steering time of each sector in proportion to that sector's communication history: when node i points its directional antenna to sector s, we have

t_s = T_i · (n_s / Σ_{s'∈Ŝ_i} n_{s'}),  (1)

where n_s denotes the number of messages node i has received through sector s, taken from the DES N records (Equation (2)). In the preliminary stage, n_s and Σ_{s'} n_{s'} in (2) are both very small, and the sector that is the first to discover neighbor nodes tends to get a bigger t_s, since n_s and the sum may all be generated by this one sector; in that case the node may point its antenna to this sector for an inappropriately long t_s, close to T_i. As the neighbor discovery proceeds, according to (1) and (2), the t_s of each sector would become too stable to respond to topology changes. For example, the t_s of sectors where few neighbor nodes have been discovered tends to be small or even close to zero. However, nodes can change their communities with a small probability, as we mentioned in Section 3. Moreover, the relative positions among nodes can sometimes change dramatically even if they walk in the same community, such as when nodes walk in opposite directions. Therefore, neighbor nodes can appear in sectors which used to be empty, and the previous t_s of these sectors would be obsolete.

To make (2) sensitive to the topology of the network, HSD estimates topology changes by monitoring the frequency of message reception. Firstly, over a specific monitoring time period M, node i records for each sector s the recent receiving frequency f_s, i.e., the number of messages received through s during the most recent period M per unit dwell time in s (Equation (3)). Obviously, f_s is a stable value when the relative positions among the nodes are stable; in that case, the instant receiving frequency is stable as well (Equation (4)). According to (1) and (3), the ratio f_s/t_s is determined only by the status of node i and not by the status of sector s (Equation (5)). Due to this, we define F_i ≜ f_s/t_s, which is the message receiving frequency of node i regardless of the status of its sectors. According to (4) and (5), when the topology in the network is stable, the following holds between every two sectors of node i:

f_s / t_s = f_{s'} / t_{s'},  for all s, s' ∈ Ŝ_i.  (6)

Therefore, we can compare f_s/t_s among the sectors in Ŝ_i to estimate the topology variation around node i based on (6). The functioning of (6) is analyzed as follows. In the preliminary stage of the neighbor discovery process, according to (6), the adjustment of t_s starts after the first monitoring time period, and node i is supposed to scan more than two sectors during the first monitoring period in order to apply (6). Hence, the initial t_s is restricted within a reasonable range by the comparison of f_s/t_s among sectors. After that, when the topology around node i changes, there are two situations:

(i) The relative positions among nodes change while the neighbor nodes are still within the transmission range of node i; that is, the number of neighbor nodes within the transmission range of node i remains the same;

(ii) Some neighbor nodes are no longer within the transmission range of node i, or new neighbor nodes come into the transmission range; that is, the number of neighbor nodes within the transmission range of node i changes.

The simulation results of [1] show that in some cases nodes cannot discover each other even though they are within each other's transmission range, owing to the directional antennas, while in general more neighbor nodes within the transmission range of node i lead to more frequent communications. Thus, we assume that F_i is determined only by the number of nodes within the transmission range of node i.
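A minimal sketch of the HSD allocation in (1)-(2) and the stability check of (6), under the proportional-allocation reading given above. The minimum-time floor and the tolerance threshold are our own assumptions, added so that empty sectors are still visited occasionally and so that measurement noise does not trigger constant reallocation.

```python
from typing import Dict

def allocate_steering_time(history: Dict[int, int], T: float,
                           floor: float = 0.05) -> Dict[int, float]:
    """Eq. (1)-(2): give each sector steering time proportional to its
    message history n_s, keeping a small floor so that sectors with no
    history are still scanned (the floor is an illustrative assumption)."""
    total = sum(history.values())
    k = len(history)
    if total == 0:
        return {s: T / k for s in history}            # no history yet: equal split
    times = {s: max(floor * T / k, T * n / total) for s, n in history.items()}
    scale = T / sum(times.values())                    # renormalize to period T
    return {s: t * scale for s, t in times.items()}

def topology_changed(freq: Dict[int, float], times: Dict[int, float],
                     tol: float = 0.25) -> bool:
    """Eq. (6): with a stable topology, f_s / t_s should be (nearly) equal
    across sectors; a large spread signals a topology change."""
    ratios = [freq[s] / times[s] for s in freq if times[s] > 0]
    return (max(ratios) - min(ratios)) > tol * max(ratios)

times = allocate_steering_time({0: 12, 1: 3, 2: 0, 3: 9}, T=4.0)
# Sector 2 suddenly receives messages although it has no history: change detected.
print(times, topology_changed({0: 2.0, 1: 0.5, 2: 0.1, 3: 1.6}, times))
```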
For the first situation, suppose some nodes move from one sector of node i to another. Obviously, the receiving frequency of the first sector will decrease while that of the second will increase. We can then readjust the dwell times of the two sectors according to (6), since the node-level receiving frequency is stable.

For the second situation, the node-level receiving frequency changes as well, and we need to rewrite (6) in the form of (7). When the second situation happens, to clearly figure out the variation, we separately analyze the changes of the sector-level and node-level frequencies in (8) and (9); simplifying (9) then yields (10) and (11). According to (11), when new neighbor nodes come into the transmission range of node i, for example appearing in a particular sector, the receiving frequency of that sector becomes larger than it was before, so its increment exceeds that of the node-level frequency; according to (7), its dwell time increases. Conversely, when nodes leave a sector, its receiving frequency becomes smaller than before, and its dwell time decreases. So far, we can conclude that (6) works well for dynamic topologies.
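As a rough code rendering of the mechanism in (1)-(6), the sketch below allocates dwell time in proportion to each sector's share of received messages and re-estimates the shares once per monitoring window, so that stale history cannot freeze the schedule. Since the paper's symbols did not survive extraction, all identifiers, and the small floor share that keeps empty sectors probed, are our own assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Minimal HSD-style scheduler: 'counts_' tallies interactive messages per
// sector, 'T' is the antenna rotation period, and dwell times start at T/2.
class HsdScheduler {
public:
    HsdScheduler(int sectors, double T)
        : dwell_(sectors, T / 2.0), counts_(sectors, 0), T_(T) {}

    void onMessageReceived(int sector) { ++counts_[sector]; }

    // Called at the end of each monitoring window of length M: the share of
    // messages per sector (the weight of Eq. (2)) sets the next dwell times
    // (Eq. (1)); resetting the counters makes the schedule track topology
    // changes in the spirit of Eqs. (3)-(6).
    void endOfMonitoringWindow() {
        long total = std::accumulate(counts_.begin(), counts_.end(), 0L);
        if (total == 0) return;                 // nothing observed yet
        const double floorShare = 0.05;         // keep probing quiet sectors
        for (std::size_t s = 0; s < dwell_.size(); ++s) {
            double share = static_cast<double>(counts_[s]) / total;
            dwell_[s] = T_ * std::max(share, floorShare);
            counts_[s] = 0;
        }
    }

    double dwellTime(int sector) const { return dwell_[sector]; }

private:
    std::vector<double> dwell_;
    std::vector<long> counts_;
    double T_;
};
```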
Routing Protocol In a network where nodes are equipped with directional antennas, there is no consistent path from the source node to the destination node. In this section, we devise a special routing protocol called HSD-R, which is formulated based upon HSD. Routing decisions are made based on the knowledge maintained in the neighbor node list.

Neighbor Node Table Processing. We take Figure 5 as an example to demonstrate how the neighbor node table is analyzed when there are data packets waiting to be transmitted. As illustrated in Figure 5, when node i plans to transmit its data packets to node k, it can send its packets to nodes j1 and j3 immediately. But node j3 is not qualified to be a relay node, since node k is beyond its transmission distance. Node j1, however, needs further consideration, because the connecting time between it and node i could be too short to finish a complete data transmission when they point their directional antennas at each other. Moreover, once data packets arrive at node j1, the time that node j1 and node k need to point their directional antennas in the proper directions also has to be considered, since the data packets are held in node j1 during that time and cannot be transmitted.

In order to make an appropriate routing decision, we need to find out which nodes can relay the data packets to the destination successfully with a high probability. When node i wants to communicate with node j through a given sector, the communication probability between them through that sector can be expressed as in (12). Furthermore, we need to make sure that this sector is the best option for node i to communicate with node j. For example, node i and node j may have communicated frequently through a sector in the most recent monitoring time period, but this could simply be the result of the two nodes happening to detect each other during the recent period M while seldom encountering each other before. In other words, the neighbor relationship between node i and node j in that sector exists only during the most recent monitoring period. Such neighbors are not good options for relaying data packets, because of the weak neighbor relationship between them and the source node. Therefore, we introduce a coefficient that reflects the stability of the neighborhood relationship between node i and node j; it is calculated as in (13).

Before analyzing the time metric of every possible path, we detail how the directional antennas operate during data transmission. As mentioned in Section 3, nodes are supposed to switch their directional antennas in a predictable sequence in order to make the routing decisions more precise, while the source node can begin its data transmission at any time and nodes are not time synchronized in this paper. Therefore, we propose a mechanism that lets nodes change their rotation pattern spontaneously. When data packets are held in a node and waiting to be transmitted (the packets may be generated by the node itself or received from other neighbors), the node switches its rotation mode to DT-M; otherwise it keeps the scan pattern PTN-M. The broadcast of interactive messages continues whether the rotation mode is DT-M or PTN-M, since topology changes can also happen during data transmissions.

Considering that nodes make their routing decisions based on the status of nodes within two-hop distance, at least three sets of directional antennas participate in a data transmission. According to the neighbor node list maintained by the source node i, we can construct the relationships among the sectors of the source node, the neighbor nodes, and the two-hop nodes. As illustrated in Figure 6, the position and length of the frame for source node i are drawn according to the Sector Distribution in its neighbor node list; the positions and lengths of the frames for relay node j and two-hop node k are decided by the Neighbor-Information Area and the Two-Hop Node List, respectively. In Figure 6, the zero point of the time axis refers to the moment when the source node prepares to send its data packets. The green zone between points A and B, or between points C and D, refers to the time period during which the directional antennas of two nodes cover each other; that is, the two nodes can communicate with each other during the green zone.

When the relay node receives data packets successfully, there are two possibilities: (1) the relay node waits for a period of time until it can communicate with node k, as shown in Figure 6(a); (2) the relay node sends them to node k immediately, i.e., with no waiting time, as shown in Figure 6(b).
A source node like node i can send data packets at any time; in other words, the position of point O can be anywhere within the rotation period. In general, it can be categorized into two cases: (1) it is ahead of the able-transmitting period, like point O in Figure 6(a); (2) it is within the able-transmitting period, like the second marked point in Figure 6(a) and point O in Figure 6(b). If the node misses both kinds of time positions, like the late point in Figure 6(b), it has to wait for the next communicating sector, where it can be considered as point O again.

From Figure 6 we can see that, in order to figure out the relationship among the three sets of directional antennas, it is necessary to compute the beginning time points and ending time points of the able-transmitting periods for every pair of distinct nodes among {i, j, k}, as given in (14)-(16). We then present an algorithm (Algorithm 1) to predict the waiting time and able-transmitting time of every possible path. After the waiting time and able-transmitting time of each path are calculated according to Algorithm 1, the source node is able to make its routing decisions.

Routing Decisions. When the source node, say node i, wants to send its data packets to the destination node, say node k, the candidate relays are defined by the set of neighbor nodes discovered by node i and the two-hop nodes recorded in the Two-Hop Node List of each neighbor node; the number of potential two-hop paths for node i is therefore the sum, over all neighbors, of the number of two-hop nodes each neighbor has recorded. There are four possibilities when node i goes through its neighbor node table: (a) node k is recorded only in the Neighbor-Information Area; (b) node k is recorded only in the Two-Hop Node Lists of some neighbor nodes; (c) node k is recorded not only in the Neighbor-Information Area but also in the Two-Hop Node Lists, in other words, node i can send its data packets to node k directly or via relay nodes; (d) there is no record of node k in the neighbor node list.

For the first three possibilities, every possible path should be given further consideration, because direct paths may have less connecting time or a lower connecting ratio. The set of paths that can reach the destination node is denoted by Ũ. From Algorithm 1 we can see that the time consumption of a complete transmission consists only of the able-transmitting time and the waiting time. For a specific path from the source node to the destination node, obviously, the decisive factor for a successful delivery is the weakest link along the path, i.e., the most unstable neighbor relationship between two nodes. The results calculated by Algorithm 1 are theoretical times based on the most recently updated history records. To make the selection more suitable for general situations, we propose an inequality, (17), to filter every path in Ũ, where h_trans1 and h_trans2 indicate the predicted time periods for the delivery of each hop and H indicates the sum of the transmission delay and the propagation delay of one hop. In this paper, the propagation delay between nodes is assumed to be small and the size of the data packets is the same for all nodes; therefore, we assume that H is the same for every node. Inequality (17) requires that the minimum able-transmitting time period of a qualified path be sufficient to complete one packet transmission.
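Reading (17) as a weakest-link test, i.e., the smaller of the two predicted per-hop able-transmitting windows must cover one per-hop delivery time H, a candidate-path filter could be sketched as follows (struct and function names are our own):

```cpp
#include <algorithm>
#include <vector>

struct CandidatePath {
    int relayId, destId;
    double hTrans1, hTrans2;   // predicted able-transmitting windows per hop
    double waitTime;           // total predicted waiting time (Algorithm 1)
};

// Keep only the paths whose weakest link can carry at least one packet
// (our reading of Eq. (17)), then rank the survivors by their predicted
// able-transmitting time, matching Algorithm 2's descending sort of V~.
std::vector<CandidatePath> selectPaths(const std::vector<CandidatePath>& u, double H) {
    std::vector<CandidatePath> v;
    for (const auto& p : u)
        if (std::min(p.hTrans1, p.hTrans2) >= H)   // weakest-link test
            v.push_back(p);
    std::sort(v.begin(), v.end(), [](const CandidatePath& a, const CandidatePath& b) {
        return std::min(a.hTrans1, a.hTrans2) > std::min(b.hTrans1, b.hTrans2);
    });
    return v;
}
```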
In order to reduce the load on the network, we set a TTL for every data packet. Based upon the previous discussion, the TTL for the data packets on a specific path should be the sum of H and the waiting time along the path. To be more tolerant of complex environmental conditions, we set the TTL as in (18). Algorithm 2 details the procedure of path selection; the set of qualified paths is denoted by Ṽ. After the path selection process, the source node generates v copies of its data packets and transmits them through the first v paths in Ṽ, where v is a preset constant that is detailed in Section 6.

We then discuss the fourth situation, in which there is no record of node k in the neighbor node list. Apparently, a node that has encountered more neighbor nodes may hold more topology information within its two-hop distance. Hence, when this kind of extreme situation occurs, node i sends its data packets to all the neighbor nodes that have discovered more neighbor nodes than node i has, i.e., to every node j whose number of recorded neighbors is bigger than the size x of node i's own neighbor set. If there is still no qualified neighbor node after that, we choose those nodes that have encountered neighbor nodes different from node i's. Algorithm 3 describes this procedure; the set of qualified neighbor nodes is denoted by Ñ.
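Algorithm 3's fallback rule, forwarding to neighbors that appear to know more of the topology than the source does, could look like this sketch (names assumed):

```cpp
#include <set>
#include <utility>
#include <vector>

// Pick fallback relays when the destination is unknown: first, neighbors that
// have discovered more neighbors than the source has (x); failing that,
// neighbors whose neighbor sets differ from the source's own.
std::vector<int> fallbackRelays(
    const std::set<int>& ownNeighbors,
    const std::vector<std::pair<int, std::set<int>>>& neighborTables) {
    std::vector<int> chosen;
    const std::size_t x = ownNeighbors.size();
    for (const auto& [id, theirNeighbors] : neighborTables)
        if (theirNeighbors.size() > x) chosen.push_back(id);
    if (chosen.empty())
        for (const auto& [id, theirNeighbors] : neighborTables)
            if (theirNeighbors != ownNeighbors) chosen.push_back(id);
    return chosen;
}
```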
Simulation Results In this section, we evaluate the performance of HSD and HSD-R separately. We use a simulation setup that consists of a uniform distribution of nodes over a two-dimensional plane of area 20 km × 20 km. The fixed transmission range is 10 km and the beam angle of each directional antenna is π/3, i.e., 6 sectors per vehicle node. The rotation period of the directional antennas is 180 and the duration of each receiving state is 9, which means that each node can broadcast HELLO messages 18 times per rotation in total. The monitoring time is 360. The moving speed of the vehicle nodes is 50 km/h, which corresponds to nearly 15 meters every 180, with a random direction.

We vary the total number of nodes inside this area among 5, 10, and 50, representing three levels of node density: extremely low, low, and high, respectively. We assume that the area of a community is 4 km × 4 km, that the number of nodes in one community is random between 2 and 5, and that the probability of changing communities is 0.1. In the extremely low density scenario, we artificially divide the 5 nodes into two groups of 2 and 3. We run the simulation 20 times, with a simulation duration of 10^4; nodes are randomly scattered inside the area each time. There is a chance that nodes cannot discover all other nodes, because their directional antennas cannot cover the whole area.

Evaluation of HSD. First, we compare and analyze the performance of HSD and the random algorithm. When the nodes switch their sectors clockwise with an equal and fixed dwell time, their average time consumption turns out to be much larger than that of HSD and the random algorithm; therefore, we concentrate on the comparison between HSD and the random algorithm. Figure 7 shows the performance of the two algorithms, in terms of the time every node needs to successfully discover a certain percentage of the neighbor nodes in the area.

In Figure 7(c), HSD and the random algorithm show no obvious difference in time consumption while fewer than 70% of the nodes have been discovered, since in the high density scenario there are plenty of neighbor nodes in every sector, so the scan pattern of HSD is similar to that of the random algorithm. Once the discovered nodes reach 70%, the performance of HSD becomes worse: the higher density of nodes leads to more frequent communication among nodes, and the nodes, especially those located in the central part of the scenario, go through turbulence in their sector schedules. However, as shown in Figures 7(a) and 7(b), HSD achieves better performance when the density of nodes is low, since the probability of communication is smaller and changes of topology are easier to detect. Especially in Figure 7(a), when nodes in one community have discovered a node in another community, they can point their directional antennas at the particular sector where the other community is located for a long time, since there is little interference from other sectors. Hence, both the average time consumption and the standard deviation of the time consumption of HSD are smaller than those of the random algorithm. According to [20], the packet size has an obvious influence on the performance of protocols; however, interaction among nodes is rare here, since the node density in this paper is quite low; for example, only one neighbor node exists within a sector in the low density scenario. In summary, the simulation results demonstrate that HSD achieves better performance than the random algorithm on neighbor discovery when nodes are sparsely scattered.

Evaluation of HSD-R. From the simulation results of HSD, we conclude that HSD achieves better performance when nodes are sparsely scattered. Reference [16] proposes a routing protocol called Spray and Focus. Though Spray and Focus is not designed for networks where directional antennas are equipped, it is a classic routing protocol for opportunistic networks, and, most importantly, the nodes in [16] show location and group preference as well. Therefore, we compare and analyze the performance of HSD-R and Spray and Focus in the low density and high density scenarios separately. Spray and Focus consists of two phases: in the first phase it distributes a fixed number of copies to the first few relays encountered, and in the second phase each relay can forward its copy to a potentially more appropriate relay, using a carefully designed utility-based scheme.

According to [16], a number of packet copies equal to about 5-10% of the total nodes serves as a useful rule of thumb for good performance. Because the nodes in [16] are not equipped with directional antennas, the connection probability among nodes in our scenario is much lower than in [16]. With this in mind, we increase the number of packet copies, i.e., the number of selected paths v, gradually from approximately 5% to 20% of the total nodes in order to compare the performance. The starting time of data transmission for every node is random, and the destination of every source node is selected randomly as well.

When the node density is low, only two paths can be selected from Ṽ, and around ten paths in the high density scenario. As illustrated in Figure 8, the average able-transmitting time, h_trans, among nodes in the low density scenario is larger, since nodes mostly point their directional antennas at certain sectors where another community is located for a long time, and there is little interference from other sectors.
Figure 9 depicts the performance of the two protocols in the low density scenario. Compared to Spray and Focus, the transmission delay of HSD-R is much lower, with no obvious differences across nodes; Figure 9 indicates that the paths selected by HSD-R do reduce the delay. The successful transmission rates of the two protocols in the low density scenario are both above 90%. When the node density gets higher, Figure 10 shows an obvious decrease in h_trans when the number of copies exceeds 6, i.e., around 10% of the total nodes, which indicates that the average able-transmitting time of the first 6 paths is much larger than that of the remaining paths. Figure 11 confirms that the minimum number of copies for HSD-R to achieve a reasonable success rate and delay is around 10% as well. Moreover, the success rate of Spray and Focus decreases dramatically when the number of copies is small, while the transmission delay of HSD-R changes only slightly; the minimum number of copies HSD-R needs to achieve reasonable performance is smaller than that of Spray and Focus. It should be noted that the pronounced success-rate difference between HSD-R and Spray and Focus is caused by the small total number of nodes in the scenario, where even a few data packets failing to reach the destination lead to a large decrease in the success rate. We did not simulate extremely high density scenarios, because directional antennas are not designed for that kind of scenario. In summary, compared to Spray and Focus, HSD-R can guarantee successful delivery with a smaller number of packet copies when directional antennas are equipped.

Conclusion In this paper, we concentrate on the networking of vehicles. First, we proposed a novel neighbor discovery algorithm called History-Based Sector Distribution (HSD) for vehicular ad hoc networks in which directional antennas are equipped. HSD enables vehicles to arrange their directional antennas according to the topology of the network. We showed that our algorithm achieves better performance than the traditional random algorithm, especially in scenarios where vehicle nodes are sparsely scattered. Second, we designed a routing protocol called HSD-R, derived from HSD, in which vehicle nodes make their routing decisions by analyzing the link quality of each possible path between the source node and the destination node, based on the information collected during the neighbor discovery process. The evaluation of HSD-R shows that a high successful transmission rate and a low transmission delay are guaranteed with a small number of packet copies.

Figure 3: Structure of interactive message. Figure 7: Simulation result comparison on different node densities. Figure 8: Average able-transmitting time in the low density and high density scenarios with v equal to 25% of the total nodes. Figure 11: Success rate and average delay in the high density scenario with different v.
The sector weight used in (2) represents the proportion of interactive messages received through a sector compared to the total messages received through all sectors of node i, where the per-sector count is a sum over the neighbor nodes recorded in the neighbor node list of node i, and the total count additionally sums over all sectors in Ŝ.

Algorithm 1 (waiting/able-transmitting time prediction): for every row of the neighbor node table, calculate the beginning and ending time points according to Eqs. (14)-(15); for every row of the Two-Hop Node List, calculate the beginning and ending time points according to Eq. (16); identify the waiting times of the first and second hops of each path h. Algorithm 2 (path selection): filter the candidate paths and sort Ṽ in descending order according to the predicted able-transmitting time.
Preconditioned Conjugate Gradient Acceleration on FPGA-Based Platforms: Reconfigurable computing can significantly improve the performance and energy efficiency of many applications. However, FPGA-based chips are evolving rapidly, increasing the difficulty of evaluating the impact of new capabilities such as HBM and high-speed links. In this paper, a real-world application was implemented on different FPGAs in order to better understand the new capabilities of modern FPGAs and how new FPGA technology improves performance and scalability. The application in question is the preconditioned conjugate gradient (PCG) method as utilized in underground analysis. The implementation was done on four different FPGAs, including an MPSoC, taking into account each platform's characteristics. The results show that today's FPGA-based chips offer eight times better performance on a memory-bound problem than 5-year-old FPGAs, as they incorporate HBM and can operate at higher clock frequencies.

Introduction Accelerators are devices that can provide very high performance and efficiency when executing certain applications. To this end, for certain high-performance computing (HPC) applications, field-programmable gate arrays (FPGAs) can significantly outperform GPUs, which in turn significantly outperform CPUs [1,2]. Therefore, it is highly desirable to optimize HPC applications in order to take advantage, as much as possible, of such reconfigurable accelerators. However, FPGAs are considered difficult to program, interconnect, and handle, especially within parallel systems [3]. Moreover, the heterogeneity introduced by these accelerators makes efficient management of resources and intercommunication between different devices much more complex compared to conventional homogeneous HPC systems. Due to challenges in resource management and difficulty in programming, most HPC systems have at most a limited number of nodes with FPGAs as accelerators, while several do not have any accelerators at all. For instance, no HPC system in the Top 500 [4], which ranks the most powerful computer systems in the world, utilizes FPGAs. On the other hand, "architectural specialisation with FPGAs or even Application-Specific Integrated Circuits (ASICs) could be important to overcoming the bottleneck introduced by the slowdown of Moore's Law" [5]. No more than ten years ago, GPUs were in a similar situation, available only in experimental clusters and testbeds, while now they dominate the Green 500; it is therefore highly likely that we will see the same happen with FPGAs at some point. Moreover, as clearly demonstrated in a very recent analysis of the energy efficiency of the Top 500 systems over the last almost 20 years, we need to find new architectures and utilize accelerators if we want to keep providing growing computing power in HPC systems at the current rate [6]. Historically, initial supercomputing efforts focused on stronger processing capabilities; in the 1970s, however, parallelism came into the picture and significantly boosted the performance capabilities of such systems. HPC systems rely on highly effective architectures combined with powerful processing elements. These processing elements have also varied; the main representatives are the CPU, the general-purpose GPU (GPGPU), the field-programmable gate array (FPGA), and dedicated hardware, i.e., ASICs. Subsequently, high-performance computing (HPC) has found itself at the heart of many modern technological challenges and milestones.
It has been the unavoidable outcome of the evolution towards contemporary scientific analysis and study, and it relates to a number of crucial fields, e.g., aerodynamics, quantum mechanics, and oil and gas exploration, that have a significant socio-economic impact. HPC is primarily associated with highly computationally intensive applications, and the metric commonly used for quantifying HPC platforms is the number of floating-point operations that a system can perform over a finite amount of time, usually one second. Furthermore, in order to sustain the ever-increasing demand for storing, transferring, and processing data, HPC servers need to significantly improve their efficiency. Scaling the number of cores alone is no longer a feasible solution, due to increasing utility costs and power consumption limitations. While current HPC systems can offer petaFLOP performance, their architecture limits their capabilities in terms of scalability and energy consumption. Extrapolating from top modern HPC systems, such as China's Tianhe-2 supercomputer, it is estimated that sustaining exaFLOP performance, which at the moment constitutes the next HPC milestone, would require a highly significant 1 GW of power. Similar, albeit smaller, figures are obtained by extrapolating even from the best system of the Green 500 list as an initial reference. Thus, the range of technological approaches that can serve computationally intensive applications in the context of HPC systems has been wide, and modern HPC systems have managed to achieve performance in the order of petaFLOPs, that is, 10^15 floating-point operations per second. Example top systems by this performance metric are Summit [7] in the United States, with a score of 122.3 petaFLOPS, and the Sunway TaihuLight [8] in China, which achieves 93 petaFLOPS.

The work outlined here fits in the context of heterogeneous HPC systems that use FPGAs at the very core of their architecture. Nevertheless, depending on the utilized accelerator(s) or the parallelization level that should be achieved, it is necessary to use different languages and specifications (e.g., CUDA, OpenMP, OpenCL, MPI, or GA). This variety of programming paradigms significantly increases the complexity of software development, especially since developers also try to get the maximum performance from the underlying accelerator hardware. This, however, requires certain skills and/or exposure to high-level synthesis (HLS) tools. All of the above make the development of HPC applications through tailor-made hardware accelerators a forbiddingly costly and complex process, especially for the SME sector, which is, in general, very cost-conscious. This work was funded by the OPTIMA project [9], an SME-driven project that will allow participating industries, coming from different domains, as well as applications developed by academics and used by industry, to take advantage of the new, upcoming, and promising FPGA-based HPC systems. Towards this aim, OPTIMA will utilize: (a) novel FPGA-based HPC platforms; (b) several HPC programming environments; and (c) the skills needed to promote HPC applications to take full advantage of the underlying heterogeneous HPC systems. This paper presents a small subset of the work invested towards OPTIMA's goals, along with the results that have been obtained during this effort.
Specifically, it describes in detail (i) the implementation of acceleration modules pertaining to OPTIMA's algorithms of interest on four different FPGAs, offering comparison-based observations; and (ii) the implementation of the preconditioned conjugate gradient (PCG) method, taking into account the properties of each FPGA. The rest of the paper is structured as follows: Section 2 offers an account of the PCG-related work that exists so far, while Section 3 presents the particular algebra kernels used in this work for solving PCG. Subsequently, Section 4 offers a detailed account of the different hardware platforms that have been used as hosts for the PCG accelerator modules, while Section 5 provides information as to how these modules have been implemented on the different hardware platforms. Finally, Section 6 presents the evaluation, while Section 7 concludes the paper.

Related Work The preconditioned conjugate gradient (PCG) is an iterative method utilized to solve systems of linear equations and is used in many scientific fields. Several works have been implemented on FPGAs to accelerate the processing of PCG. Specifically, in [10], Debnath et al. present a comparative analysis of multiple implementations of the conjugate gradient method on various platforms suitable for high-performance computing, such as FPGAs and GPUs. They conclude that FPGAs and GPUs are much more efficient than CPUs in calculating the conjugate gradient. Moreover, in [11], Guiming Wu et al. present an implementation of the conjugate gradient method based on a high-throughput sparse matrix-vector multiplication design that does not require zero padding. Using a Virtex-5 FPGA, they achieved an acceleration of 3.6× to 9.2× relative to software. Similarly, in [12], Jing Hu et al. present a PCG solver for 3D tetrahedral finite elements using an FPGA as the implementation platform. In their work, they chose the algorithm formulation that best suited the FPGA characteristics. As a result, using a Virtex-4 FPGA, they managed to achieve a speedup of 40× against the software implementation. Additionally, a work that targets the optimization of sparse matrix-vector multiplication in PCG is presented in [13] by Grigoras et al. Here, they optimise the implementation architecture for block-diagonal sparse matrices, which helps them achieve a 3× speedup over a multi-threaded software implementation while being 10 times more efficient in BRAM utilization than other state-of-the-art FPGA implementations. In [14], Dubois et al. present a design that implements the whole conjugate gradient algorithm instead of small accelerators. Their work outperforms a software implementation once the rank of the matrices exceeds 46,656, and it is capable of handling matrices with rank up to 116,394. One more work by Dubois et al., mainly focusing on sparse matrix-vector multiplication, is presented in [15]. They investigate two different implementations, one that targets peak performance and another that balances performance with available bandwidth. Both implementations provide the same performance, with the second being more efficient in power consumption. Their implementations provide slightly lower performance than the CPU implementations while having a 30× slower clock frequency.
The "Chronos" Preconditioned Conjugate Gradient. Simulation software is very common in industrial applications, both because taking direct measurements is very expensive and often unfeasible, and because there is usually interest in simulating past events or forecasting future ones. The increasing demand for accurate and reliable numerical simulations results in the use of very large computational grids. The size of problems can easily reach several hundreds or even thousands of millions of unknowns, and the exploitation of high-performance computing (HPC) infrastructures becomes a necessity. In several large-scale simulations, the solution of linear systems of equations is the most time-consuming task, often taking more than 90% of the total computational time [16]. In this context, one of the main goals of the OPTIMA project is the acceleration of Chronos [17], a proprietary collection of sparse linear algebra kernels for solving huge linear systems, specifically designed for HPC. Chronos is mainly written in C++ with a strongly object-oriented design. Interprocessor communication is handled by CPUs through MPI directives, while fine-grained parallelism is exploited through OpenMP multithreading and GPU accelerators. The modular design of Chronos allows the development of accelerated versions of the innermost kernels while keeping the framework for MPI communication between nodes unchanged. In this work, we present an accelerated version of the basic kernels required for the preconditioned conjugate gradient (PCG) in Chronos. The PCG is an iterative method for the solution of linear systems of the form Ax = b, where A is a symmetric positive definite matrix, b is a given right-hand side vector (i.e., a given column vector), and x is the unknown solution. Iterative methods construct, from an arbitrary initial solution x_0, a succession of vectors (x_0, x_1, x_2, ..., x_k) converging to the exact solution. Iterations are stopped when x_k is sufficiently close to x, i.e., when the relative residual is small enough: ‖b − Ax_k‖ / ‖b‖ ≤ tol, with tol a given tolerance. The PCG algorithm used to compute the succession (x_0, x_1, x_2, ..., x_k) is summarized in Algorithm 1 [18], which starts by choosing x_0 and the preconditioner inverse M^(-1). The convergence rate depends strictly on the matrix M, called the preconditioner, but this is beyond the scope of this paper; as we focus only on the acceleration of each PCG iteration, we adopt Jacobi preconditioning, where M = diag(A). The core of the PCG scheme is a collection of basic operations (vector copies and updates, dot products, the sparse matrix-vector product, and the preconditioner application, apply) and can be rewritten as in Algorithm 2 (the implementation form). At this stage of the OPTIMA project, we focused on the shared-memory version of the Chronos-PCG, while the MPI implementation will be addressed in the next development steps, once the hardware kernels are further optimized.
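For reference, a plain shared-memory C++ sketch of a Jacobi-preconditioned CG solve is given below. It mirrors the building blocks listed above (copy, vector updates, dot, SpMV, and the preconditioner application) but is a minimal rendering of the textbook scheme, not the Chronos code; the CSR layout and all names are our own.

```cpp
#include <cmath>
#include <vector>

struct Csr { std::vector<int> rowPtr, col; std::vector<double> val; int n; };

// Sparse matrix-vector product y = A*x over a CSR matrix.
static void spmv(const Csr& A, const std::vector<double>& x, std::vector<double>& y) {
    for (int i = 0; i < A.n; ++i) {
        double s = 0.0;
        for (int k = A.rowPtr[i]; k < A.rowPtr[i + 1]; ++k) s += A.val[k] * x[A.col[k]];
        y[i] = s;
    }
}
static double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Jacobi-preconditioned CG: invDiag holds 1/diag(A), so the 'apply' step is
// an element-wise product. Returns the iteration count at convergence.
int pcg(const Csr& A, const std::vector<double>& b, const std::vector<double>& invDiag,
        std::vector<double>& x, double tol, int maxIt) {
    int n = A.n;
    std::vector<double> r(n), z(n), p(n), q(n);
    spmv(A, x, q);
    for (int i = 0; i < n; ++i) r[i] = b[i] - q[i];             // r0 = b - A*x0
    for (int i = 0; i < n; ++i) z[i] = invDiag[i] * r[i];       // apply
    p = z;                                                       // copy
    double rz = dot(r, z), bnorm = std::sqrt(dot(b, b));
    for (int it = 1; it <= maxIt; ++it) {
        spmv(A, p, q);
        double alpha = rz / dot(p, q);
        for (int i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; } // axpy
        if (std::sqrt(dot(r, r)) / bnorm <= tol) return it;      // relative residual test
        for (int i = 0; i < n; ++i) z[i] = invDiag[i] * r[i];    // apply
        double rzNew = dot(r, z);
        double beta = rzNew / rz;
        for (int i = 0; i < n; ++i) p[i] = z[i] + beta * p[i];   // xpay
        rz = rzNew;
    }
    return maxIt;
}
```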
Hardware Platforms The aforementioned algorithm was implemented and evaluated on four different FPGA platforms. In this section, we provide a short description of the hardware platforms and FPGAs that were used in this work. Specifically, the Chronos-PCG was implemented (i) on an Alveo U50 accelerator card, which was used for the initial development of the application; (ii) on the Xilinx ZU9EG MPSoC of a custom-built cluster that was designed and built by the ExaNeSt project; (iii) on an Alveo U280 accelerator card of the HACC cluster at ETH Zurich, which hosts a mix of Alveo U250, U280, and U55C FPGAs; and finally, (iv) on an Alveo U55C accelerator card hosted in a new prototype platform currently containing a pair of Alveo U55C cards. The results presented in this work are the outcome of using a single FPGA from each platform. The main goal of this work was to compare the performance and resource utilization results of the FPGAs in order to better understand how much the results improve as technology and FPGA tools advance. Thus, we used different generations of Xilinx FPGAs: from the 5-year-old Xilinx ZU9EG, through the 3-year-old Alveo U50 and U280 acceleration cards, to the latest Alveo U55C acceleration card, released in November 2021.

Alveo U50 Server Initial development of the application was done on a Dell PowerEdge server powered by an Intel Xeon @ 2.2 GHz and 196 GB of RAM. It hosts an Alveo U50 accelerator card on a PCIe slot. The Alveo U50 is the smallest alternative in terms of resources among the Alveo family and has 8 GB of High Bandwidth Memory (HBM). The HBM of the Alveo U50 offers a peak memory bandwidth of up to 316 GB/s, making it an appropriate option for memory-intensive applications.

ExaNeSt Platform Within the ExaNeSt project [19], a novel multi-FPGA HPC platform has been developed. It provides high processing power through a novel scalable hardware architecture tailored to the characteristics and trends of current and future HPC applications, which significantly reduces data traffic, energy consumption, and delays. The ExaNeSt prototype consists of several nodes interconnected in a 3-D torus topology. Specifically, each prototype blade includes four interconnected daughter boards, called quad-FPGA daughter boards (QFDBs) [20], and each QFDB supports four tightly coupled Xilinx Zynq UltraScale+ ZU9EG MPSoCs as well as 64 gigabytes of DDR4 memory. QFDB boards have been successfully deployed in other platforms as well, demonstrating efficient interconnection and parallelization of accelerated tasks [21]. The current prototype we used features 256 Zynq MPSoCs and a total DDR memory size of 4 terabytes. The latest development tool by Xilinx, namely the Vitis Unified Software Platform [22], greatly automates the process of building accelerators and injecting them into Xilinx FPGAs. However, this process only works for a specific set of purpose-built Xilinx FPGA boards, such as the Alveo ones. It relies on specific constructs that describe the hardware platform so that the tool can use it efficiently. To be able to use the Vitis tool for the development of applications targeting our custom ExaNeSt platform, multiple components, both hardware and firmware, had to be modified and/or updated. More specifically, on the hardware side, new custom platform projects had to be created, which could then be incorporated into the Vitis tool as the base platform, onto which it can then connect accelerators automatically. As mentioned above, the building block of the ExaNeSt prototype is the QFDB. It incorporates four MPSoCs, each requiring a different static design for connectivity purposes. Thus, a different custom platform was implemented for each of the four FPGAs.
On the software side, the operating system had to be updated in order to support the Xilinx Runtime Library (XRT) [23]. Finally, the Linux kernel was also updated in order to successfully interface with the upgraded ExaNeSt custom firmware, which establishes the connectivity between the FPGAs. The current development and update process targets a single chassis that contains four QFDBs, i.e., 16 interconnected FPGAs.

ETH HACC Platform At ETH Zurich, in collaboration with Xilinx, a cluster has been created for research and development of architectures and HPC applications using the latest FPGA technology [24]. This cluster consists of fifteen servers, with one dedicated to development and the rest to deployment. Four deployment servers host a mix of Alveo U250 and Alveo U280 accelerator cards, whereas the other 10 were recently added and host a single Alveo U55C card each. The development of the application targeted the Alveo U280 cards, as the 10 servers that host Alveo U55C cards were added to the cluster after our implementation.

New Alveo U55C Prototype A new prototype platform was designed and built using reconfigurable hardware. It consists of two servers, each incorporating two Alveo accelerator cards. At the moment, half of the prototype is operational, including a server with an AMD Ryzen processor operating at 3.7 GHz and 256 GB of RAM. Inside the server reside two Alveo U55C accelerator cards, each providing the same reconfigurable resources as the U280 and double the HBM (16 GB), while hosting no DDR memory. The Alveo U55C is the latest Alveo card and is built for HPC. The two servers are interconnected via a 1 Gbps network, while a 10 Gbps network resides on the FPGA side. Summarizing, the main FPGA features of the four platforms described above are included in Table 1. The LUTs (look-up tables), DSPs (digital signal processing elements), and BRAMs (block RAMs) are the main components of the reconfigurable logic. The LUTs and DSPs are utilized for the implementation of the computational components, while the BRAMs are memory elements within the FPGA's logic for storing data. Notably, the new Alveo acceleration cards offer both increased high-bandwidth memory (HBM) and plentiful hardware resources, with the memory bandwidth being very important in HPC.

Implementation The Chronos-PCG introduced in Section 3 is accelerated using six hardware kernels, i.e., six separate and individual modules implemented on the reconfigurable fabric of an FPGA. Due to the modularity of the PCG scheme (see Algorithm 2), interfacing between the software and the hardware kernels is straightforward using OpenCL APIs. A setup phase takes place before the start of the iterative scheme, during which the matrix, the right-hand side, and the Jacobi preconditioner are copied from the host to the device. In addition, a set of scratchpad buffers is allocated on the device so that the scheme can be executed until convergence without additional host-to-device copies and vice versa. As such, processing is performed completely on the FPGA, while the host is responsible for synchronization and for driving the algorithm by calling the hardware kernels. Once convergence is achieved, the result is copied back to the host. A high-level block diagram of the PCG implementation on the FPGA is shown in Figure 1. Each kernel has its own I/O ports connected to the HBM, and a single compute unit (CU) is instantiated for all the kernels except the sparse matrix-vector (SpMV) multiplication kernel.
The SpMV is the most time-consuming operation of the algorithm, and as a result, multiple CUs of the respective kernel are placed within the FPGA in order to accelerate its execution. The number of CUs that fit in each FPGA is discussed below; it is mainly restricted by the number of available connections to the memory interface. A detailed description of the hardware kernels and their interfaces is provided in the following section.

Hardware Kernels The implemented kernels are the following:

• copy: This kernel copies a vector X to a second vector Y, as shown in (3): Y ← X. Table 2 lists all function parameters. In order to fully exploit the data width of the HBM channels, copy uses vectorization, in which data are fetched in batches. The memory controllers are capable of fetching 512 bits of sequential data every clock cycle, and their frequency has been set to 450 MHz. The operating frequency of the copy kernel can exceed 250 MHz, and in every clock cycle the memory controllers can read or write batches of 512 bits. The remaining kernels also utilize some form of I/O vectorization. The copy kernel consists of two functions operating in parallel using the high-level synthesis (HLS) dataflow directive, whereby data between these functions are passed using the AXI-stream protocol. The first function implements the vectorization technique, reading batches of elements from the X vector; the second function stores the batches of elements to vector Y.

• axpy: This kernel multiplies vector X by a constant alpha and adds the result to vector Y, storing the outcome in Y, as shown in (4). Table 3 lists all function parameters. Similarly to copy, axpy utilizes two functions operating in parallel, using the HLS dataflow directive for fetching data and storing results; a third function focuses on the computational part of the kernel and on updating Y with the results.

• xpay: This kernel performs processing similar to axpy, but now vector Y is multiplied by a constant alpha and then vector X is added to it. The result is written to the same vector Y, as shown in (5). Table 4 lists all function parameters. Similarly to axpy, it consists of two functions operating in parallel using the HLS dataflow directive for fetching data and storing results, plus a third function that focuses on the computational part of the kernel and on updating Y with the results.

• apply: This kernel applies the preconditioner to a vector, as shown in (6). Table 5 lists all function parameters. Again, similarly to the previously described kernels, it utilizes two functions operating in parallel using the HLS dataflow directive for fetching data and storing results, while a third function focuses on the computational part of the kernel.

• dot: This kernel computes the dot product of two vectors, as shown in (7). Table 6 lists all function parameters. The dot kernel uses three basic HLS directives. Due to vectorization, the unroll directive is used to process 16 single-precision or 8 double-precision elements in parallel. A temporary array is created to store partial results, and the kernel uses the array_partition primitive to access each cell of the array in parallel. Finally, dot uses the pipeline directive in order to initiate an iteration of the loop in every clock cycle.

• SpMV: This kernel calculates the matrix-vector product between a sparse matrix A and a vector x, with the output stored in vector y, as shown in (8). Table 7 lists all function parameters. SpMV is a memory-bound algorithm, and its main bottleneck is the random-access pattern on the x input vector. Therefore, the available memory bandwidth is the main performance factor for this HPC kernel. Further, as SpMV is commonly applied to double-precision floating-point data, this implementation focuses on that setup. However, both the effective memory bandwidth and the double-precision arithmetic are factors that make the FPGA design challenging. FPGA platforms are commonly bandwidth-constrained compared to CPUs or GPUs, and they also lack dedicated DSPs for double-precision operations. Hence, to maximize the utilization of the HBM bandwidth available on our platforms, we (i) stream all input and output data apart from the x vector; (ii) apply dataflow operations; and (iii) use multiple compute units (CUs). Each CU operates on a subset of rows of the entire problem (a partition), and each partition is placed on a separate HBM bank. Moreover, to cope with the high floating-point operation latency (especially the accumulation in the inner loop, which has a carried dependency), we use loop unrolling.
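As an illustration of the read/compute/write split described above, a Vitis-HLS-style axpy kernel might look like the sketch below. The 512-bit word struct, port bundles, and pragma placement are assumptions for illustration; the actual project kernels update Y in place, whereas this sketch uses separate input and output pointers for clarity.

```cpp
#include <hls_stream.h>

struct vec8 { double d[8]; };   // eight doubles = one 512-bit memory word

static void read_x(const vec8* x, hls::stream<vec8>& sx, int nWords) {
    for (int i = 0; i < nWords; ++i) {
#pragma HLS pipeline II=1
        sx << x[i];
    }
}

static void compute(hls::stream<vec8>& sx, const vec8* yIn,
                    hls::stream<vec8>& sy, double alpha, int nWords) {
    for (int i = 0; i < nWords; ++i) {
#pragma HLS pipeline II=1
        vec8 vx = sx.read(), vy = yIn[i], vo;
        for (int k = 0; k < 8; ++k) {
#pragma HLS unroll
            vo.d[k] = alpha * vx.d[k] + vy.d[k];   // y = alpha*x + y, lane k
        }
        sy << vo;
    }
}

static void write_y(hls::stream<vec8>& sy, vec8* yOut, int nWords) {
    for (int i = 0; i < nWords; ++i) {
#pragma HLS pipeline II=1
        yOut[i] = sy.read();
    }
}

extern "C" void axpy(const vec8* x, const vec8* yIn, vec8* yOut,
                     double alpha, int nWords) {
#pragma HLS interface m_axi port=x    bundle=gmem0
#pragma HLS interface m_axi port=yIn  bundle=gmem1
#pragma HLS interface m_axi port=yOut bundle=gmem2
#pragma HLS dataflow
    hls::stream<vec8> sx("sx"), sy("sy");
    read_x(x, sx, nWords);
    compute(sx, yIn, sy, alpha, nWords);
    write_y(sy, yOut, nWords);
}
```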
Implementation on the Platforms Moving on to the implementation on each platform, it is important to note that each FPGA has specific characteristics and limitations. Thus, we provide some details pertaining to the implementation on each of the four FPGAs.

ExaNeSt The ExaNeSt prototype is the only platform that is based on MPSoCs, and its FPGAs offer the lowest amount of reconfigurable resources. The limited reconfigurable fabric does not pose any performance issue, as the PCG application is memory bound. The main characteristic of the ZU9EG MPSoC that weakens performance compared to the other platforms is the ARM A53 processor, a low-power, low-frequency (1.2 GHz) four-core processor with low processing capabilities compared to the server processors available on the other platforms. Regarding implementation specifics, the kernels utilize vectorized inputs where possible in order to exploit the available 128-bit data width supported by the FPGA; this means that two double-precision values can be delivered to the kernels every clock cycle. Further, inside the MPSoC we managed to place a maximum of 8 SpMV compute units (CUs). This restriction comes from the three interconnects that connect to the three available data ports (named high-performance, or HP, ports) of the processing system; each interconnect offers 16 master ports. The clock frequency was set to 150 MHz, and the whole system utilizes about 45% of the available LUTs, including the static logic required for the ExaNeSt communication infrastructure. It is interesting to note that there was an issue with the accuracy of the dot kernel. The issue was due to the vectorization and the pipelining/parallelization of operations, as these change the order of operations. Changing the order of floating-point operations affects the accuracy of the results; this also applies to software execution, where changing the order of operations slightly changes the lower digits of the final result due to rounding. In our application, the results were corrupted for certain datasets. For this reason, we had to remove the parallelization of operations and allow sequential accumulation, which significantly decreases the performance of the kernel. The same issue was encountered on the Alveo U280. On the other hand, for the U50 and U55C Alveo boards there was no result corruption, only a slight difference in accuracy between software and hardware.
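The rounding effect behind this issue is easy to reproduce in plain C++: an unrolled reduction keeps several partial sums whose final combination associates the additions differently from a strictly sequential loop, so the two results can differ in the low-order bits (and, for ill-conditioned data, by more). A small illustration, with names of our own:

```cpp
#include <cstdio>
#include <vector>

// Sequential accumulation: one running sum, exactly the software order.
double dotSequential(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Unrolled accumulation with 8 partial sums, as an HLS 'unroll' plus
// 'array_partition' scheme would produce; the additions associate
// differently, so the result need not match dotSequential bit-for-bit.
double dotUnrolled(const std::vector<double>& a, const std::vector<double>& b) {
    double part[8] = {0.0};
    std::size_t n = a.size();
    for (std::size_t i = 0; i + 8 <= n; i += 8)
        for (int k = 0; k < 8; ++k) part[k] += a[i + k] * b[i + k];
    for (std::size_t i = n - n % 8; i < n; ++i) part[0] += a[i] * b[i];
    double s = 0.0;
    for (int k = 0; k < 8; ++k) s += part[k];
    return s;
}

int main() {
    std::vector<double> a(1 << 20, 1.0e16), b(1 << 20, 1.0);
    a[0] = 1.0;  // mixed magnitudes make the rounding difference visible
    std::printf("%.17g\n%.17g\n", dotSequential(a, b), dotUnrolled(a, b));
}
```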
Alveo U50 The Alveo U50 was the first FPGA we targeted in this work. The initial implementation utilized a single compute unit per kernel, providing each kernel with as many ports as possible in order not to impede traffic from/to the HBM links. The restriction on the number of ports for the U50 FPGA is 30. As such, we reduced the ports of the SpMV kernel from five to four, and those of the rest of the kernels to two each. As a result, we placed 4 SpMV CUs inside the Alveo U50.

HACC On the HACC platform, we focused on the Alveo U280 FPGAs, as they include HBM that can provide a peak memory bandwidth of 460 GB/s. Further, at the time of implementation, the 10 servers that host the 10 Alveo U55C FPGAs were not available. As we already have implementations targeting the U55C, we plan to promptly utilize these 10 new servers. The main restriction we faced while working on the U280 implementation was the number of memory ports from the accelerators towards the HBM links, which for the U280 is 32. The SpMV kernel requires four ports, while the rest of the kernels require two. As a result, we managed to place only 5 SpMV CUs inside the U280. Further, as mentioned above, we encountered the same accuracy issue as on the ExaNeSt FPGAs, which led to corrupted results. For this reason, we had to decrease the parallelization of the dot kernel, resulting in a decrease in the overall performance.

Alveo U55C The implementation on the U55C was the same as on the U280, as the two cards integrate exactly the same reconfigurable resources and have the same HBM port restriction, i.e., a maximum of 32 ports. In short, 5 SpMV CUs and 1 CU for each of the remaining kernels were placed inside the FPGA. The only difference between these two implementations was that, on the U55C, the optimized dot kernel could be utilized, as it did not corrupt the output data. Finally, the U55C can support larger datasets, as it offers double the HBM compared to the Alveo U50 and the U280.

System Performance and Evaluation In evaluating the various setups, we focused first on the consumption of resources, and second on the speed of execution. Beginning with the former, Table 8 lists the utilization of resources for each of the different FPGAs selected in the context of this work. Overall, resource utilization remained low, and this was true even for the ZU9EG FPGA, which offers the fewest resources in our set of FPGAs. As already mentioned, the application is memory bound, and on each FPGA the communication with main memory is limited by the available hardwired memory ports. Consequently, this limits the number of SpMV CUs that can be placed within each FPGA, and considering that an SpMV kernel requires four memory ports in order to achieve optimal performance, it was important to reduce the total number of required memory ports. This was achieved by sharing the FPGA memory ports among different SpMV kernels, a feature supported by the Xilinx development tools, which eventually resulted in using two to three memory ports per SpMV kernel. While this allowed us to fit 11 or 7 CUs, respectively, the performance results showed an increase in the execution time in both cases, owing to the data congestion that builds up on the shared memory ports. Subsequently, performance measurements in terms of execution times were based on different types of datasets. Specifically, these datasets were a set of matrices obtained from linear elasticity problems. The sizes and numbers of non-zeroes of the matrices, N and NNZ, respectively, are shown in Table 9.
The two "Cube" matrices arise from the same synthetic problem, discretized with tetrahedral finite elements at different levels of refinement. On the other hand, Emilia_923 arises from an industrial application, specifically from a finite element geomechanical analysis. Figures 2-4 show the obtained results in terms of execution time for 100 iterations of the PCG; the cost of the SpMV products is highlighted in blue. The execution times presented include the host software execution, the I/O from/to the FPGA fabric, and the execution of the calculations within the FPGA. As expected, the execution time decreases significantly when multiple SpMV CUs are utilized for the calculation. Moreover, PCG performance is closely dependent on that of the SpMV products, as these are the most time-consuming operations of the whole iterative scheme. Nevertheless, increasing the number of SpMV CUs does not lead to a linear decrease in the execution time of the SpMV calculations, as one would expect. The reason for this is that the x input vector and y output vector are shared between the SpMV CUs. In contrast to the A matrix, which is distributed among different buffers so that each CU can access its portion in parallel, the x and y vectors reside in the same buffer for all CUs. Distributing the x and y vectors is indeed possible, but x requires a copy of the whole vector for each CU, while y needs to be initially distributed to each CU and then, at the end of the SpMV operation, merged back. Experiments showed that these two operations increase the execution time: the multiple copies of vector x and the distribution and merging of vector y are more expensive in execution time than the effect of the congestion on the shared memory ports.

Moving on to the comparison of the different FPGA-based setups, the results show that the embedded-class ZU9EG is significantly slower than the Alveo accelerator cards, which are coupled with powerful CPUs. This result is the combination of four factors. Firstly, the red portion of the execution time (the host software) is affected by the low processing power of the ARM A53 processor. Secondly, the accelerators are significantly slower, as they operate at 150 MHz compared to the 300 MHz clock used on the Alveo cards. Thirdly, the vectorization of the accelerators on the Alveo cards allows eight double-precision numbers to be read and processed simultaneously, as the memory interface supports 512-bit-wide interfaces; on the ZU9EG, only two double-precision values fit into the 128-bit-wide interface between the FPGA, the processing system, and the DDR memory. The SpMV execution time is affected only by the lower operating frequency, as that kernel does not use vectorization. Finally, the HBM (460 GB/s) of the Alveo cards offers significantly higher memory bandwidth than the DDR memory (19 GB/s) of the ZU9EG. With respect to the comparison among the different Alveo accelerator cards, we observed that the measured execution times are very similar. The architecture of the accelerators is almost the same across the three Alveo cards, the exceptions being the number of CUs on the U50 (four CUs instead of the five on the other Alveo cards) and the slower dot accelerator on the U280. These two differences, along with the slower CPUs compared to those of the new U55C-based platform, explain the small decrease in performance.

Conclusions and Future Work Heterogeneous HPC systems constitute a highly active field of research.
OPTIMA is a research project in this domain that carries out research with an emphasis on FPGA-based acceleration towards the HPC goal. This paper presented a set of implementations of the preconditioned conjugate gradient (PCG) method on a number of different FPGA-based systems, ranging from custom, research-oriented platforms such as the ExaNeSt heterogeneous system, to academic platforms with official vendor support such as the ETH HACC platform. The work has shown how the different platforms affect the implementation of the accelerator modules on the reconfigurable hardware. Measurements were performed with regard to various performance metrics, such as throughput and resource utilization. A conclusion common to all the platforms, and one that affects performance, is that the restricted number of I/O ports limits the parallelization that can be achieved within the FPGA fabric. The problem addressed in this work is I/O bound and would require more I/O resources for optimal execution on these platforms. More importantly, the current work involves the utilization of a single FPGA from each target platform, while future steps will investigate its expansion to multiple host-FPGA bundles for further performance enhancements.
Nano-scientific Application of Atomic Force Microscopy in Pathology: from Molecules to Tissues The advantages of atomic force microscopy (AFM) in biological research are its high imaging resolution, sensitivity, and ability to operate under physiological conditions. Over the past decades, rigorous studies have been performed to determine the potential applications of AFM techniques in disease diagnosis and prognosis. Many pathological conditions are accompanied by alterations in the morphology, adhesion properties, mechanical compliances, and molecular composition of cells and tissues. The accurate determination of such alterations can be utilized as a diagnostic and prognostic marker. Alteration in cell morphology represents changes in cell structure and membrane proteins induced by the pathologic progression of diseases. Mechanical compliances are likewise modulated by active rearrangements of the cytoskeleton or extracellular matrix triggered by disease pathogenesis. In addition, adhesion is a critical step in the progression of many diseases, including infectious and neurodegenerative diseases. Recent advances in AFM techniques have demonstrated their ability to obtain molecular compositional as well as topographic information. The quantitative characterization of molecular alterations in biological specimens over the course of disease progression provides a new avenue to understand the underlying mechanisms of disease onset and progression. In this review, we highlight the application of diverse AFM techniques in pathological investigations.

Introduction AFM has emerged as a powerful nanoscopic platform for investigating various biological systems due to its applicability under physiological conditions; unlike many other high-resolution microscopy tools, AFM can be operated in liquid, near-native environments. In addition to its sub-nanometer resolution and pico-newton force sensitivity, AFM is capable of recognizing single-molecular events, compositional changes, and intercellular interactions occurring in heterogeneous biological systems during disease progression (Fig. 1) [1,2]. Many pioneering studies have investigated the possibility of utilizing AFM as a nano-diagnostic tool to establish unbiased, quantitative assessment rubrics for monitoring pathological conditions [3]. Comparative studies between healthy and pathologic specimens have been performed using AFM; nanoscopic structures, mechanical properties, and single-molecular events were directly observed in cells and tissues obtained via minimally invasive surgical intervention [4]. Diseases usually evolve as a result of unwanted morphological alterations. In order to develop therapeutic interventions, it is important to identify the causes of disease-related phenotypes. AFM is capable of revealing disease-related structural changes [5,6]. It obtains topographic images based on the interatomic potential between the AFM probe and the sample. As the probe scans across the sample surface, the interatomic potentials move the cantilever in the perpendicular direction so as to keep the probe-sample interaction force constant, and from the cantilever's movement a topography reflecting the contours of the sample surface is generated [7,8]. In addition, AFM can be used as a nano-indenter to measure the mechanical compliances of biological samples. Cells actively reorganize their internal structure, leading to alterations in their mechanical properties [9,10]. Many studies have reported changes in the mechanical compliances of cells and tissues as a pathologic manifestation [11,12].
The simple indentation of the AFM probe onto the sample yields a force-distance curve [13]. By analyzing the obtained force-distance curve with mathematical models, the mechanical compliances, i.e., elastic moduli, can be accurately determined. The Hertz model has been the most widely utilized mathematical model for this purpose, and models modified from the Hertz model have also been adopted to resolve the issues raised by the heterogeneous nature of biological samples [14-16]. Cell adhesion is essential in biological processes including cell proliferation, migration, and fate determination [17]. The adhesive strength of cells varies with the substratum, topography, and chemo-mechanical properties of the cellular microenvironment. In particular, the molecular composition and the mechanical properties of the extracellular matrix play a key role in cell spreading and migration [18]. Adhesion force is characterized by AFM-based pull-off force measurements, in which the AFM probe approaches a surface and is then retracted. The pull-off force is defined as the maximum attractive force during the retraction of the tip from the surface [19,20]. Pioneering studies have provided insight into disease-associated alterations in the cell-cell adhesion molecules expressed on cell surfaces. For instance, the adhesion of ICAM-1, VCAM-1 and integrin VLA-4 on endothelial cells and monocytes is known to be a major determinant of the course of inflammatory diseases [21-23]. Microbial adhesion on enamel surfaces causes dental infections such as caries [21]. Moreover, metal ions such as zinc(II) and copper(II) are implicated in the increased aggregation of the Aβ peptide into toxic oligomers, accelerating the pathogenesis of Alzheimer's disease [22,23]. Thus, AFM-based studies have attempted to investigate whether the disruption of cell-cell/substrate adhesions affects the progression of these diseases [24-28]. The quantitative characterization of intermolecular interactions is essential for a profound understanding of biological processes [29]. AFM probes functionalized with antibodies have been utilized to recognize antigenic sites on the surface of the cell membrane [30]. The extension lengths and the rupture forces are measured from force-distance curves obtained while the probe retracts from the sample; they reflect the specific binding forces between the antibody attached to the AFM probe and the antigens on the cell surface. Mapping these specific binding forces across a surface is referred to as molecular recognition imaging with AFM. Using this technique, one can recognize antigenic sites on the cell membrane [31]. A variety of biosystems have been investigated by simultaneously obtaining topography and molecular recognition images. Molecular interactions with high specificity, including antigen-antibody, DNA aptamer, and ligand-receptor pairs, have been utilized for this purpose [32,33]. AFM molecular recognition imaging has been utilized to investigate pathologic conditions such as pseudoexfoliation [34], cystic fibrosis [35], pertussis [36] and neurodegenerative diseases [37], which are known to involve alterations in the molecular composition of the cell membrane. Many diseases display numerous pathophysiological modifications, including structural or compositional changes, which can be used as diagnostic or prognostic markers [38-41].
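To make the Hertz analysis described above concrete, the sketch below fits a Young's modulus to a synthetic force-indentation curve for a spherical tip (Python). This is a minimal illustration rather than code from the cited studies; the tip radius, Poisson ratio, target modulus, and noise level are assumed values.

import numpy as np
from scipy.optimize import curve_fit

def hertz_sphere(delta, E, R=1e-6, nu=0.5):
    """Hertz contact force (N) for a spherical tip of radius R (m) indenting
    an incompressible sample (Poisson ratio nu) to a depth delta (m)."""
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

# Synthetic approach curve: 0-500 nm indentation into a ~2 kPa sample
rng = np.random.default_rng(1)
delta = np.linspace(0.0, 500e-9, 200)
force = hertz_sphere(delta, E=2e3) + rng.normal(0.0, 2e-11, delta.size)

# Fit the Young's modulus (Pa); pathologic softening or stiffening would
# appear as a shift in this fitted value
(E_fit,), _ = curve_fit(lambda d, E: hertz_sphere(d, E), delta, force, p0=[1e3])
print(f"Fitted Young's modulus: {E_fit:.0f} Pa")

Modified contact models (e.g., for conical tips or finite sample thickness) change only the geometric prefactor and the exponent of the indentation depth.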
AFM has been adapted to investigate such pathophysiological modifications in order to understand the underlying mechanisms of various pathologic conditions. This requires obtaining biological specimens, including blood samples and surgical specimens, from patients and healthy volunteers in the clinical setting. For example, erythrocytes were isolated from blood samples in order to investigate hereditary spherocytosis, iron deficiency anemia, sickle cell disease, and type 2 diabetes using AFM [27,42-46]. In addition, it is relatively easy to obtain tissue samples from asthma patients for AFM studies because bronchial tissues can be collected through routine bronchoscopy [47]. However, studies on some diseases require more invasive procedures for specimen collection. For instance, in order to investigate osteoarthritis, chondrocytes must be isolated from human articular cartilage [48,49], and a more rigorous surgical approach is required to obtain tissue samples from animals; islet tissue was surgically obtained from mouse pancreas for an AFM study on type 1 diabetes [50]. Human tissues discarded during surgical procedures are also used in AFM studies: human lenses harvested from patients during cataract surgery were used for an AFM study on pseudoexfoliation syndrome [51]. In this review, we address the potential of AFM as a clinical diagnostic tool to detect the pathological changes associated with various diseases. There have already been considerable advances in the use of AFM for cancer diagnosis and prognosis, and we have addressed various aspects of AFM applications in cancer biology in our previous publications [52]; in this review, we therefore focus on diseases other than cancer.

Morphology

Since live erythrocytes were first imaged by AFM, remarkable progress has been made in cell imaging with AFM under physiological conditions [53]. Many AFM studies have been performed on various kinds of cells under controlled pH and temperature in liquid environments to avoid unfavorable distortions in the images. AFM images provide detailed morphological features such as size, shape and surface topography at sub-nanometer resolution [54-57]. In addition, they provide information on cellular architecture, such as structural, conformational, and constitutional information on cytoskeletal proteins and membrane lipids [58-61]. During pathological progression, cells often undergo morphological modulations. Due to its high resolution, AFM can visualize early, subtle changes in cell morphology that lie beyond the detection limits of other microscopic investigations, before significant pathological conditions develop. Morphological changes can also be observed with AFM shortly after therapeutic intervention, and thus therapeutic efficacy can be evaluated from AFM images. A healthy erythrocyte has a biconcave disk shape with a very shallow center. A disorder in erythrocytes can be detected by monitoring their shape, size, membrane proteins, number, and hemoglobin content [62,63]. Abnormalities found in erythrocytes are pathological indicators in many diseases such as hereditary spherocytosis (HS) [64], anemia [65], and malaria [66]. Physicians clinically diagnose anemia by measuring the number of erythrocytes and the amount of ferritin, an iron-containing protein, in the blood [67]. Clinical guidelines for the diagnosis of HS also recommend close monitoring of erythrocyte abnormalities.
To diagnose diseases associated with abnormal erythrocytes, hematocrit, osmotic fragility, and direct antiglobulin tests are usually carried out to measure the volume and fragility of erythrocytes and to detect antibodies attached to erythrocytes [64,67]. Nevertheless, the aforementioned tests lack specificity and often lead to false-positive results consistent with a wide spectrum of clinical disorders [64,66-68]. Several studies have suggested that AFM imaging can serve as an alternative diagnostic tool with higher specificity and accuracy, as summarized in Table 1 [54,55,69]. HS is a hemolytic disorder caused by congenital defects. Anemia, jaundice, and splenomegaly are clinical symptoms experienced by HS patients. An AFM study identified the morphological hallmarks of HS as small spheroidal erythrocytes with a poorly organized membrane lattice and decreases in height, peak-to-valley distance, and surface roughness [42]. Surgical intervention, such as splenectomy, is a therapeutic strategy that has been adopted to relieve HS symptoms, and a comparative AFM study was performed to evaluate the efficacy of splenectomy in HS patients [42]. Interestingly, although this surgical intervention was effective as a remedy for hemolytic anemia and other symptoms found in HS patients, the AFM study revealed no morphological restoration of the erythrocytes, suggesting the need for a more fundamental therapeutic intervention such as allogeneic hematopoietic stem cell transplantation. Morphologically, pathologic erythrocytes that appear as ovals or elongated rods are called elliptocytes. AFM images obtained from erythrocytes of patients with iron deficiency anemia showed significant aggregation of membrane proteins. Further deformation was observed on the cell surface: the cell center swelled, deviating from the normal biconcave shape [43]. Based on this study, complete restoration of erythrocyte morphology during treatment was proposed as a criterion for determining the appropriate time for treatment termination. The most remarkable superiority of AFM over other microscopic imaging methods is its capability of visualizing cellular morphology together with the ultramicroscopic structures of the cell membrane. Traditional diagnosis of malaria relies on the microscopic examination of malaria parasitemia from stained blood samples smeared on glass slides [66]. In this microscopic diagnosis, low resolution and dry conditions make it difficult to distinguish malaria parasites from similar species. Consequently, false-positive results, late detection, and the absence of standardization in the diagnosis of malaria parasitemia have resulted in increased mortality [66,68,70,71]. Epidemiological and molecular studies have reported that the spectrin-based cytoskeleton of erythrocytes is strongly associated with malaria pathogenesis [72,73]. Spectrin is a cytoskeletal protein on the plasma membrane of erythrocytes that forms a mesh structure by associating with actin filaments and thus maintains plasma membrane integrity. During the progression of malaria pathogenesis, erythrocytes infected by the human malaria parasite, Plasmodium falciparum, are expected to undergo substantial changes in membrane integrity and deformability for effective transmission to mosquitoes [74-76].
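Morphological markers like those used for HS erythrocytes above (height, peak-to-valley distance, surface roughness) are computed directly from AFM height maps. The following sketch illustrates the standard roughness formulas on synthetic data; the map size and corrugation scale are assumptions, and in practice a plane or polynomial background subtraction would precede this step.

import numpy as np

def roughness_metrics(height):
    """Average (Ra) and root-mean-square (Rq) roughness plus the
    peak-to-valley range of a 2-D AFM height map (values in metres)."""
    h = height - height.mean()          # remove the mean height offset
    ra = np.abs(h).mean()               # average roughness, Ra
    rq = np.sqrt((h ** 2).mean())       # RMS roughness, Rq
    pv = height.max() - height.min()    # peak-to-valley distance
    return ra, rq, pv

# Synthetic 256 x 256 height map of a membrane patch (~5 nm RMS corrugation)
rng = np.random.default_rng(2)
height = rng.normal(0.0, 5e-9, (256, 256))
ra, rq, pv = roughness_metrics(height)
print(f"Ra = {ra * 1e9:.2f} nm, Rq = {rq * 1e9:.2f} nm, P-V = {pv * 1e9:.2f} nm")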
Indeed, an AFM study confirmed the appearance of "knobs", assemblies of adhesive proteins on the membrane of infected erythrocytes, along with the elongation of spectrin filaments and enlargement of the spectrin mesh during the progression from the ring (early) and trophozoite (growing) stages to the schizont (dividing) stage [77]. A recent AFM study, combined with coarse-grained molecular dynamics simulations, explicitly showed the reversible modulation of the spectrin-actin network [78]. Another AFM study investigated changes in the morphology of host cells during viral infection. Viruses enter host cells through physical adhesion to, and engulfment by, the host cells, so morphological and mechanical modulations of the host cells are expected during infection. A significant protrusion and softening of the cell membrane, attributed to the viral infection, were directly revealed by AFM topographic images [79]. The study reported that membrane protrusions of different sizes were associated with the exocytosis of protein structures and of progeny virus. Despite its numerous advantages, the AFM imaging technique is limited by its time resolution; it usually takes about 5 min to obtain a single frame. To overcome this problem, high-speed atomic force microscopy (HS-AFM) was invented [80]. The first generation of HS-AFM could capture images of moving proteins at a rate of 80 ms per frame [80]. With recent advances, the real-time imaging of biological processes at the molecular level has been achieved; as an example, myosin V walking along actin filaments was successfully visualized by HS-AFM [81,82]. Indeed, the fast-scan ability of HS-AFM is very beneficial for monitoring the dynamic changes of biological specimens during the pathological progression of diseases. An interesting study using HS-AFM was reported by Watanabe-Nakayama et al. [83]. Using HS-AFM, they successfully monitored the dynamic process of fibril formation and elongation of amyloid β-protein (Aβ), a key pathogenic agent in neurodegenerative diseases such as Alzheimer's disease. Amyloid fibril accumulation is associated with numerous neurodegenerative diseases [84-93]. However, the mechanisms by which Aβ accumulation in the brain leads to neurodegeneration remain unclear. Clarification of the fibrillation mechanism, the structural features of the amyloid fibrils, and their physical and mechanical properties is expected to unveil the roles of amyloid fibrils in the progression of a range of conditions from mild cognitive impairment to Alzheimer's disease [94]. HS-AFM images showed two different growth modes of Aβ, one producing straight fibrils and the other producing spiral fibrils. The switch between the two growth modes was suggested to be a key step in determining the Aβ polymorphisms associated with the pathogenic condition. AFM has also been used to observe the effects of compounds or drugs that induce nanoscale morphological modifications in single cells. For instance, glutaraldehyde is a common chemical fixative used to preserve cells and tissues for electron microscopy. Shibata-Seki et al. investigated the effect of glutaraldehyde fixation on corneal endothelial cells using AFM and found that glutaraldehyde treatment resulted in shrinkage of the endothelial cells owing to the evaporation of water [95]. Glycans play a key role in physiological and pathological processes by mediating cell-cell and cell-ECM interactions.
Glycosylation of biomaterials influences cell fate, including proliferation, differentiation, and functionality. Figueiredo et al. used AFM to investigate the biocompatibility of neoglycosylated films for human SH-SY5Y neuroblastoma cell lines [96]. The AFM topographic images showed that the neoglycosylated collagen films exhibited well-defined fibrillary structures, while the untreated control had amorphous structures. Furthermore, the human SH-SY5Y neuroblastoma cells showed good biocompatibility with the neoglycosylated collagen films. Their results suggested that the morphological alteration of the neoglycosylated collagen could modulate the intermolecular and inter-fibrillar interactions of the triple-helical domain of the collagen films, resulting in improved biological activity. Antimicrobial peptides are a promising class of antimicrobials that exhibit activity against antibiotic-resistant bacteria, parasites, and viruses. Fantner et al. used HS-AFM to monitor, at nanoscale resolution, the real-time morphological modification induced by the antimicrobial peptide CM15 on living Escherichia coli cells. They found that the cell surface changed from smooth to corrugated after treatment with CM15 [97]. In summary, AFM has been used to monitor the morphological distortion of cells at nanometer resolution. The high-resolution images provide more quantitative and bias-free diagnostic information than conventional diagnostic assays. Although the use of AFM imaging for the observation of biological samples has its own drawbacks, it offers invaluable potential as a diagnostic tool, especially when combined with existing diagnostic tools [98].

Mechanical compliance

Many diseases are inherently accompanied by mechanical alterations of tissues and cells. For instance, skin aging induces a decrease in skin resilience, which is attributed to changes in the composition and organization of the extracellular matrix [99]. Consequently, the mechanical properties of cells and tissues are important indicators of the pathologic progression of diseases [100]. Mechanical compliance, also called softness, indicates the flexibility of tissues or cellular materials under external stress. The extent of mechanical compliance is often expressed as an elastic modulus, i.e., Young's modulus, which can be calculated from the stress-strain relation; a high Young's modulus indicates a low mechanical compliance [101]. Over the past decades, various techniques have been developed to investigate the mechanical compliances of biological samples such as cells and tissues, including optical stretchers [102-105], micropipette aspiration [106], microfluidics [107], magnetic tweezers [108-110], and AFM [111-114]. Among these, AFM has been the most widely utilized in bio-mechanical assays [13,52,115-118]. Simple indentation experiments are carried out to determine the elastic moduli of samples, and some attempts have been made to obtain both the storage and loss moduli of biological samples by superimposing an oscillating motion on the probe while indenting the samples [113,115,119]. For the clinical application of AFM techniques, there has been an increasing number of studies investigating tissues obtained through minimally invasive surgical interventions such as biopsy [47,50,120]. One of the long-standing problems in probing tissues, in comparison with cells, is the technical difficulty of immobilizing tissue samples on hard substrates in liquid environments.
To overcome this problem, a novel method was adopted for the efficient immobilization of tissues: a "bed of nails"-like approach using nanopillars to anchor pancreatic islets demonstrates an exemplary strategy for the proper immobilization of tissues [50]. In addition, comparative studies evaluating changes in elastic moduli with regard to storage and buffer conditions have been carried out to establish standard conditions for AFM nano-mechanical studies on tissues [47]. Table 2 summarizes AFM-based bio-mechanical reports on changes in elastic moduli attributed to disease onset and progression. Interestingly, while samples from diseases such as sickle cell disease and cardiovascular complications showed increases in Young's moduli compared to healthy samples, samples from diseases such as asthma, osteoarthritis, and diabetes conversely showed reduced mechanical integrity. Sickle cell disease is an inherited disorder caused by a genetic mutation in hemoglobin. AFM indentation experiments performed on human erythrocytes harvested from patients with sickle cell disease showed that pathological erythrocytes are about three times stiffer than normal cells [44]. The seven-stranded polymer structure of hemoglobin S and changes in the affinity of spectrin and actin filaments were suggested to be responsible for the increased stiffness of sickled erythrocytes. Mechanical stiffening has also been noted as a hallmark of cardiovascular complications [119,121]. A recent AFM study directly observed a significant increase in the elastic moduli of ventricular tissues freshly harvested from mice with pressure overload-induced cardiac hypertrophy [121]. Cardiac hypertrophy, a leading cause of cardiac complication-induced death, is characterized by abnormal enlargement and thickening of the myocardium. The study also reported that the increase in the elastic moduli of hypertrophic myocardium enhances the production of vascular endothelial growth factor through the PI3K/Akt signaling pathway, thus facilitating angiogenesis during the progression of cardiac hypertrophy to heart failure. The mechanical stimuli from the stiffened matrix were found to be mediated by talin 1 and integrin β1. The findings of the study provided not only a direct quantification of mechanical changes in the myocardium but also pointers towards novel pharmacological interventions to slow down the detrimental progress of cardiac hypertrophy. Moreover, age-related aortic stiffening, another cause of heart failure, was quantitatively evaluated by an AFM-based biomechanical assay [119]. The study was unique in that frequency-modulation atomic force microscopy (FM-AFM) was used to determine both the storage and loss moduli in localized regions. The study reported an age-induced elevation in elastic moduli, which was more prominent in the interlamellar regions than in the lamellar regions. Interlamellar regions are composed of a complex meshwork of collagen fibers, elastin fibers, and smooth muscle cells in which major rearrangements occur with aging; thus, their mechanical alterations were more severe than those of other regions. Age-related mechanical degradation of human chondrocytes has also been reported in an AFM study [48], in which old chondrocytes showed three-fold lower stiffness than their normal counterparts.
A similar observation was made for sodium nitroprusside (SNP)-induced chondrocyte apoptosis, a typical osteoarthritis model: SNP-treated chondrocytes showed a remarkable decrease (90%) in elasticity [49]. Chondrocytes are embedded in an extracellular matrix composed of collagens, proteoglycans, and glycoproteins to form articular cartilage. Aging of the articular cartilage results in cartilaginous degeneration such as osteoarthritis [122]. The mechanical disintegration observed in old chondrocytes is strongly associated with the distortion of this macromolecular framework during the aging process, leading to the damage or death of chondrocytes [123,124]. Strikingly, the mechanical disintegration of aged chondrocytes seems contrary to the age-dependent mechanical modulation of human articular cartilage: Stolz et al. reported age-dependent stiffening of articular cartilage with a progressive decrease in glycosaminoglycan content [125]. However, the elastic moduli observed in osteoarthritis patients confirmed progressive softening of the articular cartilage; the age-dependent stiffening of human articular cartilage is overruled by the progressive softening found in osteoarthritis, which is again attributed to the disintegration of the collagen meshwork. In addition, such mechanical modulation of articular cartilage was apparent at nanometer, not micrometer, indentation depths. This requirement for nanoscale indentation suggests that the AFM-based indentation technique might serve as a pre-symptomatic diagnostic tool for osteoarthritis. A further study demonstrated that the elastic modulus determined by AFM indentation experiments can be utilized as a progression marker during the treatment of osteoarthritis [49]. The study also reported the preventive effect of resveratrol on SNP-treated chondrocytes. Resveratrol, a polyphenol derived from fruits such as grapes, is an anti-inflammatory agent [49]. The elastic moduli measured by AFM showed that pretreatment with resveratrol prevents chondrocytes from undergoing SNP-induced mechanical disintegration, and immunofluorescence images revealed that this mechanical modulation resulted from the active reorganization of the actin cytoskeleton induced by resveratrol treatment. It is also fascinating that AFM studies have reported a close correlation between inflammatory diseases, such as asthma, and the mechanical softening of tissues [47,50]. The mechanosensitive production of insulin from islets was confirmed by AFM-based nano-indentation experiments performed on the transgenic DORmO mouse model of type 1 diabetes [50]. The study showed that autoimmune insulitis resulted in mechanically soft islets, and the intra-islet accumulation of hyaluronan prior to the onset of diabetes was considered a major cause of these mechanical changes. Hyaluronan, a polymer in the extracellular matrix, is highly hygroscopic, and thus its increased accumulation promotes hydration and softening of tissues. Asthma is a hyperresponsive complication of the airway characterized by chronic inflammation. Structural remodeling of bronchial walls includes aberrations in extracellular matrix composition: increases in collagen types I, III, and V as well as fibronectin, and a decrease in collagen type IV [126,127]. Lately, AFM studies on tissues from bronchial biopsies have reported lower elastic moduli in airway tissues collected from asthma patients than in tissues collected from healthy volunteers [47].
Although the major determinants contributing to the reduced mechanical stiffness of bronchial tissues in asthmatic patients are yet to be determined, AFM nano-indentation has emerged as an early and quantitative diagnostic tool for asthma, a respiratory complication with various symptoms. Remarkably, AFM nano-indentation studies have also been conducted to gather fundamental information needed to fight intractable diseases such as human immunodeficiency virus (HIV) infection and nerve injury. One study provided quantitative evidence of the mechanical switching behavior of HIV during the infection process [128]: a stunning switch between softening and stiffening behaviors took place from viral budding to viral entry into the host cells. A more detailed investigation addressed the mechanical stability of the HIV-1 capsid using AFM, revealing mechanical hardening in hyperstable mutants; mutations modulating capsid stability are known to strongly affect HIV infectivity. Similarly, AFM experiments were carried out to elucidate the fundamental mechanism of axonal degeneration due to nerve injury or compression. A state-of-the-art study combining microfluidics with AFM was performed to determine the threshold force required to compromise axonal survival after compression [129]. The study showed that rat hippocampal axons fully recovered axonal transport with no detectable axonal loss when compressed with pressures up to 65 ± 30 Pa for 10 min, whereas dorsal root ganglion axons, which showed a 20% lower elastic modulus than hippocampal axons, resisted pressures up to 540 ± 200 Pa. It was suggested that the integrity of the axonal cytoskeleton mainly determines axonal fate after damage. AFM-based force spectroscopy, combined with fluorescence microscopy, has also been used to determine changes in neuronal stiffness during neurite outgrowth [130]; fluorescence images indicated that the organization of microtubules is a major component associated with neuronal stiffness. In addition, an AFM study explicitly determined that the growth cones of axotomized neurons underwent mechanical softening during sciatic nerve injury [131]. Although it is too early to determine whether AFM offers diagnostic advantages for HIV and nerve injury, the information obtained from AFM studies should be crucial to improving pharmacological interventions for the treatment or prevention of these intractable diseases. Each of the aforementioned diseases has its own underlying mechanism that causes the corresponding mechanical alterations. Nevertheless, each generally involves the aberrant reorganization of the actin cytoskeleton or the extracellular matrix. Depending on the extent of hydration, the organization and composition of the extracellular matrix or the actin cytoskeletal proteins are modulated during disease onset and progression, and the resulting changes in elastic moduli point in two different directions of the stiffness spectrum, as shown in Table 2; even reversible switching behavior was observed in HIV infection. As shown for osteoarthritis, the interplay between mechanical changes in cells and their microenvironments collectively affects the mechanical behavior of tissues during disease progression. Overall, AFM-based biomechanical studies are promising for the early diagnosis of diseases because they are able to detect pre-symptomatic changes in the mechanical properties of cells and tissues in many pathological conditions.
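Comparative studies like those summarized above ultimately test whether moduli measured on patient and control samples come from different distributions. The sketch below shows one common analysis choice with invented numbers; because AFM modulus distributions are often skewed, a rank-based test is frequently preferred over a t-test.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical Young's moduli (kPa) from AFM indentation of two groups
healthy = np.array([12.1, 10.8, 13.5, 11.9, 12.7, 14.0, 11.2])
disease = np.array([7.4, 6.9, 8.8, 7.1, 9.0, 6.5, 8.2])

# Nonparametric two-sample comparison, robust to skewed distributions
stat, p = mannwhitneyu(healthy, disease, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")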
Adhesion properties

Cell adhesion plays an important role in cell communication and regulation. The mechanical interaction between a cell and its extracellular matrix (ECM) controls cellular behavior and functions, and the alteration of cell adhesion can be a defining event in the onset of numerous diseases such as type 2 diabetes, neurodegenerative diseases, osteoarthritis, cardiovascular diseases and sickle cell anemia [132-136]. Tremendous efforts have been made to develop techniques that quantitatively determine cell adhesion [137], and recently AFM has been extensively utilized to determine the adhesive properties of cells [31,135,136,138-141]. Simple studies using the AFM-based adhesion assay were performed to investigate bacterial adhesion on dental surfaces [28,142]. Bacterial adhesion is considered a primary cause of dental diseases such as caries (cavities) [143]. For more than half a century, fluoride treatment of teeth has been carried out to prevent dental caries, and an AFM study revealed that reduced bacterial adhesion on enamel surfaces is a key factor in the cariostatic effect of fluoride treatment [142]. Furthermore, dental restorative materials such as the composite resin Amelogen® and dental alloys often attract bacterial adhesion, resulting in the formation of secondary caries. AFM-based force spectroscopy evaluated the adhesion forces of cariogenic pathogens such as Staphylococcus aureus on dental restorative materials, as shown in Table 3. The findings suggested that surface roughness and surface free energy, which govern initial staphylococcal adhesion forces, are the main characteristics to be considered for dental restorative materials. As summarized in Table 3, AFM-based adhesion assays have investigated how various pathologic conditions change the adhesion properties of cells. First, it has been shown that aging and diabetes increase the adhesion properties of erythrocytes [45]. Type 2 diabetes is a metabolic disorder with high blood sugar levels owing to insulin resistance or deficiency [144-146]. It has been postulated that a high level of glucose in the blood enhances viscosity and aggregation in the membrane of erythrocytes. The increased adhesion properties of erythrocytes in older people may explain why elderly persons are more vulnerable to vascular diseases including diabetes. Remarkably, the increase in the adhesion properties of erythrocytes found in patients with type 2 diabetes was more significant than the changes observed in healthy older people. Unlike the diabetes-induced changes in adhesion properties, some diseases such as osteoarthritis lead to a decrease in the adhesion properties of cells [48]. Clinically, osteoarthritis (OA) is diagnosed with radiography [147]. Radiographic examination provides information on the bony changes that occur during OA prognosis, such as osteophyte formation, subchondral sclerosis, asymmetric joint space narrowing, subchondral cysts, and subluxation [147-151]. However, radiographic diagnosis often results in a delayed or missed diagnosis of OA [152]. Thus, an AFM-based study was performed to investigate the prognosis of OA [48]. The study showed that the adhesion forces of OA chondrocytes were relatively low and distributed over a narrow range compared to normal chondrocytes (see Table 3).
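Operationally, the adhesion values reported in assays such as the chondrocyte study above are pull-off forces read from retract curves, following the definition given earlier. Below is a minimal sketch on a synthetic retract curve; the dip depth, position, and noise level are assumed values.

import numpy as np

def pull_off_force(force):
    """Pull-off force: magnitude of the most negative (attractive)
    force recorded while the tip retracts from the surface."""
    return -np.min(force)

# Synthetic retract curve: a ~1.5 nN adhesive dip before the tip snaps free
rng = np.random.default_rng(3)
z = np.linspace(0.0, 200e-9, 400)                      # retraction distance (m)
force = -1.5e-9 * np.exp(-((z - 40e-9) / 15e-9) ** 2)  # adhesive dip
force += rng.normal(0.0, 2e-11, z.size)                # detector/thermal noise

print(f"Pull-off force: {pull_off_force(force) * 1e9:.2f} nN")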
Furthermore, the study revealed a decrease in integrin β1-mediated chondrocyte-ECM interactions in OA, implicating a perturbation of the cell matrix in OA. The down-regulated expression of integrin β1 in OA chondrocytes was identified as the main mechanism behind the reduced adhesion forces of OA chondrocytes. The adhesions detected by a bare AFM probe involve the collective interactions between the probe and various proteins on the cell membrane, so it is not easy to determine which component among the various adhesion-mediating molecules is responsible for the modulated adhesions observed in diabetes and OA. Accordingly, there have been attempts to adopt AFM-based single-molecule force spectroscopy (SMFS) to determine the specific molecules mediating adhesion in pathologic conditions. AFM-based SMFS detects single functional receptors on cells and the unbinding force between a receptor and the corresponding ligand. Several pathologic disorders, such as sickle cell disease, inflammation, and autoimmune blistering skin disease, have been investigated using this AFM technique. Sickled red blood cells (RBCs) are known to show increased adhesion to other RBCs and to the endothelium. The enhanced adhesion of RBCs to the endothelium causes a delayed microvascular passage of deoxygenated RBCs, promoting the sickling and entrapment of RBCs; this series of events initiates the vaso-occlusive episodes that are characteristic of sickle cell disease. An AFM-based adhesion assay identified that an increase in the binding events between intercellular adhesion molecule-4 (ICAM-4) and αvβ3 integrin results in the abnormal adhesion of sickled RBCs to endothelial cells [27]. The study also showed that ICAM-4 is activated by a cyclic adenosine monophosphate-protein kinase A-dependent pathway [27]. The recruitment of leukocytes into injured tissues leads to the progression of inflammation [153]. At the beginning of inflammatory pathogenesis, leukocytes in the blood circulation adhere to vascular endothelial cells and migrate through them into the interstitial space. Jaczewska et al. showed that the adhesion of Jurkat cells to stimulated HUVEC monolayers occurred predominantly in junctional regions [154]. Furthermore, AFM adhesion mapping revealed that the redistribution of junctional adhesion molecule-A (JAM-A) along junctional regions plays a key role in mediating lymphocyte recruitment to the endothelium and subsequent transendothelial migration under inflammatory conditions. An AFM adhesion study was also utilized to investigate the molecular signature of desmosomal junctions associated with autoimmune blistering skin diseases such as pemphigus vulgaris (PV) [155]. Desmosomal junctions are cadherin-based intercellular junctions in epithelial tissues, and their disruption strongly correlates with the incidence of PV. The study revealed that the incubation of pathogenic antibodies with desmoglein 3 resulted in the disruption of intercellular adhesion and structural changes in human keratinocytes, leading to blister formation. Furthermore, AFM-based adhesion studies have been used to determine the effects of trace elements on the progression of neurodegenerative diseases. In the presence of trace metals such as copper and zinc, the aggregation and neurotoxicity of amyloid-β (Aβ) are significantly enhanced, and the aggregation of Aβ is considered a major cause of Alzheimer's disease. Recently, Hane et al.
used AFM to characterize the kinetic and thermodynamic parameters of the dissociation of an Aβ dimer in the presence of copper and zinc ions [156]. Their results demonstrated that while copper at nanomolar concentrations did not alter the single-molecule Aβ-Aβ affinity, zinc at nanomolar concentrations reduced it. Overall, different studies have indicated that altered cell adhesion is a defining event in the onset and progression of various pathological conditions. The accurate determination of cell adhesion provides critical information for the diagnosis and prognosis of diseases. We believe that the molecular signatures revealed by AFM-based force spectroscopy provide valuable information not only for the diagnosis of different diseases but also for the development of therapeutic strategies.

Molecular recognition imaging

The cell membrane is composed of interdependent species of molecules, molecular groupings, and supramolecular entities, which play a crucial role in cell functions such as cell adhesion, cellular communication, tissue development, inflammation, tumor metastasis, and microbial infection [1,157]. Some molecules act as receptors and others as sensors controlling important cellular processes [158,159]. The malfunction of membrane proteins often results in the onset of diseases: many diseases, including pseudoexfoliation [34], cystic fibrosis [35], neurodegenerative diseases [37], and pertussis [36], are associated with alterations in the molecular composition of the cell membrane. Consequently, many therapeutic drugs target human membrane proteins [160]. Previously, cryo-electron microscopy [161], photoactivated localization microscopy [162], and X-ray crystallography [163] have been used to obtain the molecular structures of the cell membrane. Nevertheless, the images obtained with these techniques provide distorted information due to the pretreatment of the target molecules [2]. Thus, the emergence of AFM offers an exciting methodology for monitoring membrane proteins in near-native physiological conditions. Molecular recognition imaging with AFM combines molecular recognition with force microscopy [32,164]: the obtained images show the chemical composition of the sample as well as its topographical structure. This technique allows molecular recognition at concentrations below the detection limits of current technologies such as enzyme-linked immunosorbent assays, mass spectrometry, and protein microarrays [165,166]. In order for AFM to detect specific targets, the AFM probe has to be functionalized with a molecule of high affinity [33,167]. The force applied to break the bond between the probe and the target molecule is quantitatively determined from the force-distance curve [168-170]. Previous studies have demonstrated that a silicon nitride cantilever tip functionalized with DNA aptamers or cyclo-RGD peptides is able to detect cognate α-thrombin, IgE molecules, and integrin α5β1 with an accuracy of up to ~90% [33,171]. The cystic fibrosis transmembrane conductance regulator (CFTR) is a channel localized on the apical membrane of the epithelial cells lining exocrine glands. CFTR maintains the salt and water balance in the epithelium and regulates cell volume [172,173]. CFTR dysfunction results in a severe disease known as cystic fibrosis (CF), characterized by impaired epithelial transport in the respiratory system, liver, and pancreas.
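As a brief aside: for single-molecule unbinding measurements such as the Aβ dimer dissociation experiments discussed above, kinetic parameters are commonly extracted from the loading-rate dependence of rupture forces using the Bell-Evans model. The sketch below fits the off-rate and barrier distance to synthetic data; it illustrates the general method, not the exact analysis pipeline of the cited study.

import numpy as np
from scipy.optimize import curve_fit

kBT = 4.11e-21  # thermal energy at ~298 K (J)

def bell_evans(rate, koff, x_beta):
    """Most probable rupture force (N) as a function of loading rate (N/s)
    in the Bell-Evans model, with off-rate koff (1/s) and barrier
    distance x_beta (m)."""
    return (kBT / x_beta) * np.log(rate * x_beta / (koff * kBT))

# Synthetic rupture forces at loading rates from 100 pN/s to 100 nN/s
rates = np.array([1e2, 1e3, 1e4, 1e5]) * 1e-12   # pN/s -> N/s
forces = bell_evans(rates, koff=0.5, x_beta=0.3e-9)

(koff_fit, xb_fit), _ = curve_fit(bell_evans, rates, forces, p0=[1.0, 0.5e-9])
print(f"k_off = {koff_fit:.2f} 1/s, x_beta = {xb_fit * 1e9:.2f} nm")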
The most prevalent alteration is the deletion of the amino acid phenylalanine at position 508 (F508del), leading to misfolding, impaired trafficking to the membrane, and a reduced number of CFTR channels on the plasma membrane [174-176]. Clinically, a sweat test is used to measure the level of chloride ions in sweat via a quantitative coulometric test or a chloride titration test; however, the sweat test is unreliable when sweat production is insufficient [177]. Ebner et al. used the AFM molecular recognition imaging technique to investigate CFTR in the erythrocyte membrane at the single-molecule level [46]. While normal human erythrocytes have a high permeability to chloride [178,179], the erythrocytes of CF patients with the F508del mutation showed about a 30% decrease in CFTR on the plasma membrane. The AFM molecular recognition imaging technique was also applied to the monitoring of protein aggregation on the human lens capsule [180]. Pseudoexfoliation syndrome (PEX) is characterized by the deposition of whitish-grey extracellular fibrils in the anterior lens capsule, leading to irreversible blindness [181-183]. PEX is usually diagnosed by slit-lamp biomicroscopy [184]; however, biomicroscopy cannot provide information on the protein changes that cause the onset of PEX, which would facilitate the development of treatment modalities. The AFM molecular recognition images obtained by Creasey et al. identified a localized distribution of clusterin, one of the proteins implicated in PEX, whereas clusterin did not follow a specific distribution pattern on normal lens capsules [51]. The distribution pattern arose from the aggregation of misfolded proteins in PEX, leading to a chaperone response by clusterin. This investigation shows the feasibility of using the AFM molecular recognition imaging technique to detect pathological alterations in biological tissues. Interestingly, the nicotinic acetylcholine receptor (nAChR) on neurons from the ventral respiratory group has also been monitored by the AFM molecular recognition imaging technique [185]. nAChR is a member of the ligand-gated ion channels in the central and peripheral nervous systems [186]. Functional alterations in α7 nAChRs have been implicated in abnormal cellular functions such as cell replication and differentiation, axonogenesis/synaptogenesis, as well as synaptic function and behavior [186,187]. The AFM probe was conjugated with an anti-α7 subunit nAChR antibody, which interacts with the surface of NK1-R-positive neurons. Acute exposure to nicotine caused an 80% decrease in the binding ability of the antibody to the α7 subunit in a dose-dependent manner. The study suggested that nicotine exposure reduces the binding probability of α7 subunit-containing nAChRs, which correlates with the loss of nicotinic receptor function. The AFM molecular recognition technique has also been extended to the study of medically important microbes. Pertussis is a toxin-mediated disease: Bordetella pertussis attaches to the cilia of the respiratory epithelial cells and secretes exotoxin. The secreted exotoxin incapacitates the cilia and results in inflammation of the respiratory tract, which interferes with the clearing of pulmonary secretions [188]. Bacterial adhesion is the most significant step in the development of bacterial infection. Pertactin, fimbriae, and filamentous haemagglutinin adhesin (FHA) proteins interact with different components of the respiratory epithelium to facilitate the attachment of the cells.
FHA participates in the first step of adhesion by binding, through its recognition domains, to respiratory epithelial cells and macrophages [188-190]. In the clinical field, the polymerase chain reaction is used for the detection of B. pertussis DNA sequences [191]. An AFM-based force spectroscopy assay was performed to obtain information about the localization and distribution of FHA-mediated adhesions in B. pertussis [192]. The force strength of the recognition events ranged from 50 pN to 900 pN. Cluster and nearest-neighbor analysis revealed that the number of clusters diminished during time-lapse imaging, but the size of connected clusters markedly increased, showing that the active clustering of adhesin nanodomains on the bacterial cell membrane is a crucial step in bacterial infection. The study successfully demonstrated the application of AFM-based molecular recognition imaging in monitoring the spatio-temporal rearrangement of adhesins at the molecular level. The information obtained might contribute to understanding the basic molecular mechanisms through which bacterial pathogens cause infectious diseases. In summary, the AFM molecular recognition imaging technique is widely utilized in the imaging of biological specimens including cells, tissues, and bacteria. The technique offers nanoscale imaging resolution and time-lapse imaging capability. In addition, AFM provides highly reproducible, specific, and efficient molecular detection; consequently, alterations in tissue and cell structures under pathological conditions can be closely monitored, and the detection of antigens is more efficient because lengthy preparation procedures are avoided. We anticipate that molecular recognition techniques will be very useful in the development of therapeutic modalities for various diseases.

Conclusion

In this review, we have addressed the basic techniques of AFM that are widely utilized in the detection of pathological conditions. The increasing number of studies adopting AFM as a vital tool in the study of various pathological conditions indicates intense scientific awareness of its potential. Its nanoscale imaging capability has made it possible to detect morphological changes associated with diseases such as hereditary spherocytosis, iron deficiency anemia, malaria, and neurodegenerative diseases. Accurate determination of cell/tissue stiffness has improved the early diagnosis of diseases such as sickle cell anemia, asthma, type 1 diabetes, osteoarthritis, cardiovascular diseases, and HIV infection. AFM can also be utilized to quantify the adhesion properties of cells; diseases such as type 2 diabetes, osteoarthritis, sickle cell disease, inflammatory diseases, Alzheimer's disease, and periodontal diseases have been monitored in terms of adhesion properties. The molecular recognition imaging technique has revolutionized the exploration of biological specimens at the single-molecule level. Remarkably, this technique is a unique force spectroscopic tool that enables us to monitor the spatial distribution of chemical heterogeneity with nanometer precision. Specific single-molecule interactions include antigen-antibody and ligand-cell surface receptor bonds; these interactions are prevalent in numerous biological processes such as immune response, genome replication and transcription, and infection.
Interestingly, molecular recognition imaging techniques using antibodies and antigens of interest expand the application of AFM in pathologic investigations with a high degree of sensitivity and specificity. Cystic fibrosis, pseudoexfoliation syndrome, neurodegenerative diseases, and whooping cough have all been studied using AFM-based molecular recognition imaging. We expect that the diverse AFM techniques discussed above will bring tremendous improvements to clinical studies. The molecular information obtained from AFM should facilitate the early diagnosis of diseases before they progress to complications that cannot be treated by current therapeutic modalities.
1-Nitropyrene Induced Reactive Oxygen Species–Mediated Apoptosis in Macrophages through AIF Nuclear Translocation and AMPK/Nrf-2/HO-1 Pathway Activation

1-Nitropyrene (1-NP), one of the most abundant nitropolycyclic aromatic hydrocarbons (nitro-PAHs), is generated from the incomplete combustion of carbonaceous organic compounds. 1-NP is a specific marker of diesel exhaust and is an environmental pollutant and a probable carcinogen. Macrophages participate in immune defense against invasive pathogens in heart, lung, and kidney infection diseases. However, no evidence has indicated that 1-NP induces apoptosis in macrophages. In the present study, 1-NP was found to induce concentration-dependent changes in various cellular functions of RAW264.7 macrophages, including reduction of cell viability; generation of apoptosis; mitochondrial dysfunction; apoptosis-inducing factor (AIF) nuclear translocation; intracellular ROS generation; activation of the AMPK/Nrf-2/HO-1 pathway; changes in the expression of Bcl-2 family proteins; and depletion of antioxidative enzymes (AOEs), such as glutathione peroxidase (GPx), catalase (CAT), and superoxide dismutase (SOD). These results indicate that 1-NP induced apoptosis in macrophages through AIF nuclear translocation and ROS generation resulting from mitochondrial dysfunction and the depletion of AOEs, together with activation of the AMPK/Nrf-2/HO-1 pathway.

Introduction

1-Nitropyrene (1-NP) is a nitropolycyclic aromatic hydrocarbon (nitro-PAH), a class of environmental pollutants generated from the incomplete combustion of carbonaceous organic fuels, biomass, and other compounds [1]. 1-NP is a highly specific marker of diesel exhaust. Various studies have detected 1-NP in the environment and in foods, including in soil, road dust, rice, cabbage, and the atmosphere [2,3]. The high lipid solubility of 1-NP allows it to permeate the gastrointestinal system, respiratory system, and skin [4]. 1-NP is one of the most abundant nitro-PAHs in urban ambient air and is a major contributor to mutagenic and carcinogenic effects [5-7]. The International Agency for Research on Cancer (IARC) has classified 1-NP as a group 2A carcinogen, indicating that it is probably carcinogenic to humans [8]. Ambient concentrations of 1-NP in air range from 10 to 1000 pg/m³ in urban areas, whereas in rural and remote areas with low traffic intensity they range from 1 to 100 pg/m³ worldwide; concentrations tend to be higher in winter than in summer [8]. To date, there is no direct evidence of human carcinogenesis induced by 1-NP [8]. However, long-term exposure leading to the accumulation of 1-NP causes several diseases in animal models: liver, lung, and mammary gland carcinomas are induced by 1-NP at doses of 25 to 100 μmol/kg during long-term exposures of more than 12 weeks [8]. Apoptosis plays a significant role in pathogenesis, mutagenesis, and tumorigenesis through mitochondrial dysfunction [9]. 1-NP induces apoptosis in liver epithelial Hepa1c1c7 cells, hepatoma HepG2 cells, bronchial epithelial BEAS-2B cells, and type II pulmonary epithelial A549 cells [6,10-12]. Macrophages, which differentiate from monocytes, are a group of mononuclear phagocytes that participate in immune defense against invasive pathogens in heart, lung, and kidney infections [13]. Alveolar macrophages are the predominant resident phagocytes in the alveolar air space.
When activated, they defend against inhaled insults such as environmental pollutants, invasive bacteria, and lung trauma [13,14]. The excessive activation of macrophages can result in inflammatory responses and lead to cytotoxicity and apoptosis [15,16]. Mitochondrial disruption plays a key role in macrophage apoptosis [14,17]. Several molecular mechanisms participate in mitochondrial disruption, including the expression of the Bcl-2 family; the translocation of apoptosis-inducing factor (AIF) and cytochrome c; and the depletion of antioxidative enzymes (AOEs), such as glutathione peroxidase (GPx), catalase (CAT), superoxide dismutase (SOD), and heme oxygenase-1 (HO-1) [14,18-20]. Recently, we reported that 1-NP induces cytotoxicity and genotoxicity in macrophages through poly(ADP-ribose) polymerase-1 (PARP-1) cleavage via caspase-3 and -9 activation, downstream of mitochondrial cytochrome c release and an upstream p53-dependent pathway [21]. However, no evidence has indicated 1-NP-induced apoptosis in macrophages. Therefore, the current study examined cell viability and apoptosis in macrophages exposed to 1-NP and analyzed the mechanism of action.

Cell Culture. The murine macrophage-like cell line RAW264.7 (BCRC 6001) was obtained from the Bioresource Collection and Research Centre (Taiwan). All cells were grown as monolayer cultures at 37°C in 5% CO₂ using DMEM supplemented with 1% penicillin, streptomycin, and fungizone, and 10% FBS. Cell passaging was conducted using 0.05% trypsin with 0.53 mM EDTA [20]. After seeding, the cells were incubated with 1-NP at concentrations of 0, 3, 10, 30, and 50 μM for 6, 12, 24, or 48 h for the cell viability assay; for all other assays, the cells were incubated with 1-NP at concentrations of 0, 3, 10, 30, and 50 μM for 24 h.

Flow Cytometric Analysis of Necrosis and Apoptosis. Differentiation of apoptosis and necrosis was performed on a BD Accuri C6 flow cytometer (San Jose, CA, USA) using an FITC-Annexin V/PI apoptosis detection kit. The RAW264.7 cells were seeded at a density of 5 × 10⁵ cells/well in 24-well plates for 12 h. After replacing the culture medium with serum- and phenol red-free medium, the cells were exposed to 0, 3, 10, 30, and 50 μM concentrations of 1-NP for 24 h. After collection of 10⁵ cells, apoptosis and necrosis were identified through dual staining with FITC-Annexin V and PI staining solution in the dark at room temperature for 15 min, as described previously [22]. Early apoptotic cells were Annexin V-positive and PI-negative (FITC-Annexin V+/PI−).

2.5. Mitochondrial Membrane Potential (MMP) Assay. Mitochondrial membrane potential (MMP) was assessed using the mitochondrial membrane potential assay dye JC-1, according to the manufacturer's protocol, as described previously [22]. After 5 × 10⁵ cells were treated with 1-NP at various concentrations for 24 h, they were washed twice with PBS and incubated with JC-1 dye in serum-free medium for 30 min at 37°C. After washing, the cells were analyzed using the BD Accuri C6 flow cytometer.

2.6. Mitochondrial Permeability Transition Pore (MPTP) Assay. The mitochondrial permeability transition pore (MPTP) was assessed using a commercial assay kit according to the manufacturer's protocol. After 5 × 10⁵ cells were treated with 1-NP at various concentrations for 24 h, they were washed and incubated with MPTP staining dye in serum-free medium for 15 min at 37°C. The cells were then incubated with 1 mM CoCl₂ for 15 min at 37°C.
After washing, the cells were analyzed using the BD Accuri C6 flow cytometer.

Measurement of Intracellular ROS Concentration. Intracellular ROS generation was evaluated using DCFH-DA, per the method of our previous study [14]. After 5 × 10⁵ cells were treated with 1-NP at various concentrations for 24 h, the cells were incubated with DCFH-DA for 30 min at 37°C. After washing with PBS, the fluorescence was measured in a microplate reader at an excitation wavelength of 488 nm and an emission wavelength of 515 nm.

Cell Fractionation and Western Blot Assay. The levels of protein expression in whole cells and subcellular fractions were measured using western blot assay, per a previously described method [14]. The RAW264.7 cells were seeded at a density of 5 × 10⁶ cells/well in a 10 cm dish for 12 h. After replacing the culture medium with serum- and phenol red-free medium, the cells were exposed to 0, 3, 10, 30, and 50 μM concentrations of 1-NP for 24 h. After cell collection, protein from whole cells was extracted in lysis buffer (25 mM Tris-HCl at pH 7.6, 1 mM phenylmethylsulphonyl fluoride, 150 mM sodium chloride, 1% Nonidet P-40, 1 mM sodium orthovanadate, 10% glycerol, 0.1% SDS, and phosphatase and protease inhibitors). Fractions containing cytosolic, mitochondrial, and nuclear proteins were isolated from cells using a cytoplasmic and nuclear protein extraction kit and a mitochondria extraction kit. The protein content of the supernatant was determined using the Bradford assay. Equal amounts of protein were mixed with 5X sample buffer, separated by 7.5%-12.5% SDS-PAGE, and electrophoretically transferred onto polyvinylidene difluoride membranes. The membranes were blocked with 5% skimmed milk for 1 h at room temperature. They were then incubated with the indicated primary antibodies (AIF, Bcl-2, Bcl-xL, Bad, Bax, Bid, HO-1, Nrf2, P-AMPK, AMPK, and β-actin) in 0.5% skimmed milk overnight at 4°C and then with the secondary antibody for 1 h at room temperature. Finally, protein bands were visualized and quantified by densitometry using an enhanced chemiluminescence (ECL) detection system.

2.9. Measurement of AOE Activities. The RAW264.7 cells were seeded at a density of 10⁶ cells/well in 6-well plates for 12 h. After replacing the culture medium with serum- and phenol red-free medium, the cells were exposed to 0, 3, 10, 30, and 50 μM concentrations of 1-NP for 24 h. After cell collection, the AOE activities, including GPx, CAT, and SOD, were assayed with the respective detection kits according to the manufacturer's instructions [14].

Statistical Analysis. Data represent three independent experiments for the western blot assay; four independent experiments for the measurement of AOE activities; and five independent experiments for the cell viability assay, necrosis and apoptosis analysis, MMP assay, and measurement of intracellular ROS concentration. Values are expressed as mean ± standard deviation (SD). All data were analyzed in SPSS software. Multiple group comparisons were performed using one-way ANOVA followed by Bonferroni's post hoc test. P < 0.05 indicated statistical significance for all tests.

Effects of 1-NP on Subcellular Fraction Translocation of AIF in RAW264.7 Macrophages. The translocation of AIF from the mitochondria to the nucleus is critical in the caspase-independent mitochondrial apoptosis pathway.
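Before turning to the results, note that the Annexin V/PI classification described in the Methods reduces to quadrant gating on two fluorescence channels. The sketch below illustrates the gating logic on synthetic intensities; the gate values stand in for thresholds that would be set from stained and unstained controls.

import numpy as np

def quadrant_gate(annexin, pi, a_gate, pi_gate):
    """Classify flow cytometry events by Annexin V / PI quadrant gating
    and return the percentage of events in each quadrant."""
    quadrants = {
        "viable (V-/PI-)": (annexin <= a_gate) & (pi <= pi_gate),
        "early apoptotic (V+/PI-)": (annexin > a_gate) & (pi <= pi_gate),
        "late apoptotic (V+/PI+)": (annexin > a_gate) & (pi > pi_gate),
        "necrotic (V-/PI+)": (annexin <= a_gate) & (pi > pi_gate),
    }
    n = annexin.size
    return {name: 100.0 * mask.sum() / n for name, mask in quadrants.items()}

# Synthetic intensities for 1e5 events (fluorescence is often log-normal)
rng = np.random.default_rng(4)
annexin = rng.lognormal(mean=3.0, sigma=1.0, size=100_000)
pi = rng.lognormal(mean=2.5, sigma=1.0, size=100_000)
for name, pct in quadrant_gate(annexin, pi, a_gate=50.0, pi_gate=30.0).items():
    print(f"{name}: {pct:.1f}%")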
The levels of AIF in the mitochondria and nucleus were measured using western blot assay in RAW264.7 cells incubated with 1-NP. Levels of AIF in the mitochondria were reduced by 1-NP in a concentration-dependent manner, with a significant effect at 10 μM (P < 0.05; Figure 4). The effects of 1-NP on AIF levels in the nucleus were also concentration-dependent, with AIF significantly increased at 10 μM (P < 0.05).

Effects of 1-NP on the Expression of Bcl-2 Family Proteins in RAW264.7 Macrophages. This study examined the regulation of mitochondrial integrity by Bcl-2 family proteins, with particular attention to the controlled release of AIF and ROS involved in caspase-independent cell death [23,24]. The effects of 1-NP on the expression of Bcl-2 family proteins in RAW264.7 cells are illustrated in Figure 5. The levels of Bcl-2 and Bcl-xL were reduced by 1-NP in a concentration-dependent manner, with a significant effect at concentrations ≥ 10 μM (P < 0.05). By contrast, the levels of Bad, Bax, Bid, and tBid were increased by 1-NP in a concentration-dependent manner, with a significant effect at concentrations ≥ 10 μM (P < 0.05). Moreover, the Bax/Bcl-2 ratio was increased by 1-NP in a concentration-dependent manner, with a significant effect at concentrations ≥ 10 μM (P < 0.05).

3.6. Effects of 1-NP on Intracellular ROS Generation in RAW264.7 Macrophages. Intracellular ROS generation results in apoptosis through mitochondrial dysfunction [14]. After RAW264.7 cells were treated with 1-NP for 24 h, intracellular ROS generation increased in a concentration-dependent manner, with a significant effect at concentrations ≥ 10 μM (P < 0.05; Figure 6).

Effects of 1-NP on AOE Activities in RAW264.7 Macrophages. The activation of AOEs, including GPx, SOD, and CAT, plays a critical role in the control of intracellular ROS generation [14]. The activities of AOEs in RAW264.7 cells treated with 1-NP at various concentrations for 24 h were monitored using AOE assay kits. GPx, SOD, and CAT activities were reduced by 1-NP in a concentration-dependent manner, with a significant effect at concentrations ≥ 10 μM (P < 0.05; Figure 7).

3.8. Effects of 1-NP on the AMPK/Nrf-2/HO-1 Pathway in RAW264.7 Macrophages. HO-1 is an antioxidative protein involved in the resolution of inflammation. Its expression is regulated by the nuclear translocation of Nrf-2, and the nuclear accumulation of Nrf-2 is induced by AMPK phosphorylation. After RAW264.7 cells were treated with 1-NP for 24 h, AMPK phosphorylation, Nrf-2 expression, and HO-1 expression were induced by 1-NP in a concentration-dependent manner, with a significant effect at concentrations ≥ 10 μM (P < 0.05; Figure 8).

Discussion

Air pollution can harm the environment in the forms of haze, acid rain, eutrophication, wildlife injury, and global climate change. Around the globe, diesel exhaust is a major contributor to air pollution, which can cause health problems such as allergies, neurodegenerative diseases, and cardiovascular disease [25-27]. 1-NP and its urinary metabolites have been proposed as markers for diesel exhaust from traffic- and factory-related diesel particulate matter [28]. The mutagenic capability of 1-NP is reduced by alveolar macrophages through phagocytosis [29]. A previous study reported that the cellular viability of RAW264.7 cells was weakly but significantly reduced by 1-NP exposure at 80 nM for 24 h [30].
In our previous study, we found that 1-NP induced cytotoxicity in a concentration- and time-dependent manner; the induction was significant when the cells were treated with 3 μM 1-NP for 48 h or 10 μM 1-NP for 6 h [21]. The results from the present study support existing evidence that 1-NP reduces the viability of RAW264.7 cells. Furthermore, our data suggest that 1-NP reduces the viability of RAW264.7 cells in a concentration- and time-dependent manner. Apoptosis is a major form of cell death and occurs as a defense mechanism of the immune system when cells are exposed to harmful substances [31,32]. Previous studies have shown that 1-NP causes apoptosis in human alveolar-basal epithelial A549 cells, human bronchial epithelial BEAS-2B cells, and mouse hepatoma Hepa1c1c7 cells [12,33,34]. Necrosis is a type of irreversible cell injury and results in cell death [32]. Previous studies have found that 1-NP causes necrosis in Hepa1c1c7 cells and BEAS-2B cells [33,34]. Our results also indicate that 1-NP induced apoptosis and necrosis in RAW264.7 cells. Moreover, 1-NP-induced apoptosis was observed in RAW264.7 cells at a lower concentration than 1-NP-induced necrosis. The extent of apoptosis, including early- and late-phase apoptosis, was higher than the extent of necrosis. After RAW264.7 cells were treated with 1-NP at 10 μM for 24 h, cell viability decreased and apoptosis increased significantly. These results suggest that apoptosis is the major form of cell death in 1-NP-treated RAW264.7 cells. Mitochondrial dysfunction is a critical factor in macrophage apoptosis [20,35]. During mitochondrial dysfunction, the dissipation of mitochondrial membrane potential and the loss of mitochondrial membrane integrity are observed in macrophages after exposure to apoptotic stimuli [20,35]. AIF, a mammalian soluble protein containing flavin adenine dinucleotide, is a nicotinamide adenine dinucleotide-dependent oxidoreductase located in the mitochondrial intermembrane space [36]. Under physiological conditions, AIF plays a crucial role in mitochondrial bioenergetics. During apoptosis, loss of mitochondrial membrane integrity results in the translocation of AIF from the mitochondria to the nucleus [37]. The degradation complex formed by AIF and related proteins promotes apoptotic DNA damage [36,37]. To the best of our knowledge, no previous studies have proposed that 1-NP decreases the mitochondrial membrane potential in macrophages. However, a previous study reported that the nuclear translocation of AIF from the cytosol to the nucleus occurred after exposure to 1-NP in Hepa1c1c7 cells, as indicated by immunocytochemical analysis [38]. Whether AIF also translocates to the nucleus in 1-NP-treated RAW264.7 cells remained to be elucidated. The present study demonstrated that the nuclear translocation of AIF was induced by 1-NP in a concentration-dependent manner in RAW264.7 cells. These results indicate that 1-NP induces apoptosis through the dissipation of mitochondrial membrane potential and the nuclear translocation of AIF due to the disruption of the mitochondrial membrane. The permeabilization of the mitochondrial membrane and the release of intermembrane space proteins (including AIF) are mediated by Bcl-2 family proteins [39,40].
The Bcl-2 family proteins can generally be divided into three groups based on their primary function: antiapoptotic proteins, which include Bcl-2 and Bcl-xL; proapoptotic pore-formers, including Bax; and proapoptotic BH3-only proteins, which include a sensitizer protein (Bad) and activator proteins (Bid and tBid) [41]. After cells are incubated with apoptosis inducers, the activator BH3-only proteins (Bid and tBid) translocate to the mitochondrial membrane and increase their affinity for the pore former, Bax. Bax causes pore formation on the mitochondrial membrane and the leakage of AIF and other soluble proteins from the intermembrane space [39,41]. The interaction between the activator BH3-only proteins and the pore-former protein is suppressed by antiapoptotic proteins (Bcl-2 and Bcl-xL). The sensitizer BH3-only protein, Bad, binds to and inhibits the activities of Bcl-2 and Bcl-xL [39,41]. A previous study proposed that 1-NP induces the mRNA expression of Bax in a concentration-dependent manner in A549 cells [42]. The present study examined the expression of the Bcl-2 family in 1-NP-treated RAW264.7 macrophages. We found that 1-NP induced the expression of Bid, tBid, Bax, and Bad in a concentration-dependent manner. By contrast, 1-NP reduced the expression of the antiapoptotic proteins, Bcl-2 and Bcl-xL, in a concentration-dependent manner. Crucially, these expression changes paralleled the mitochondrial dysfunction and AIF leakage observed in 1-NP-treated RAW264.7 macrophages. These results indicate that 1-NP induced mitochondrial dysfunction and AIF leakage by changing the expression of Bcl-2 family proteins. Oxidative stress, triggered by mitochondrial dysfunction, has been shown to play a critical role in apoptosis [43,44]. Overgeneration of ROS leads to high oxidative stress and induces AOEs and HO-1 [45,46]. HO-1 degrades heme to biliverdin, which is subsequently converted to bilirubin, an antioxidant that scavenges and neutralizes ROS [46]. SOD catalyzes the dismutation of superoxide anions to hydrogen peroxide. GPx and CAT catalyze the reduction of hydrogen peroxide to water and oxygen [45]. Nrf-2 is an important transcription factor that regulates the expression of AOEs, such as HO-1 and GPx [47]. AMPK is an upstream factor for the reduction of oxidative stress in macrophages [48]. Intracellular ROS generation is induced by 1-NP in extravillous trophoblast HTR8/SVneo cells, A549 cells, and BEAS-2B cells, as well as in the copepod Tigriopus japonicus [12,49,50]. To clarify the ROS generation and its regulatory mechanism induced by 1-NP in macrophages, we measured the production of intracellular ROS in RAW264.7 cells exposed to 1-NP. We found that 1-NP induced ROS generation, reduced AOE activity, and upregulated AMPK phosphorylation, Nrf-2 expression, and HO-1 expression. Based on these findings, we suggest that 1-NP induces ROS by causing mitochondrial dysfunction and reducing AOE activity. Further, we propose that 1-NP induces the activation of the AMPK/Nrf-2/HO-1 pathway to reduce oxidative damage in macrophages. However, the present study has limitations. First, RAW264.7 cells are a mouse macrophage cell line, not human macrophages. Direct measurement of the toxic mechanism of 1-NP in human macrophages would be ideal, but sampling human macrophages raises major ethical concerns and was therefore not feasible. Second, we proposed that the toxic effect of 1-NP occurs via apoptosis.
However, the toxic effects of 1-NP might also proceed through other pathways, such as ferroptosis, necroptosis, and autophagy; we will investigate these pathways and their underlying mechanisms in future studies. Finally, few studies link clinical disease to the macrophage toxicity and altered macrophage activity induced by 1-NP. In future work, we will investigate whether 1-NP-induced macrophage dysfunction contributes to diseases such as atherosclerosis, diabetes, and inflammatory bowel disease in different animal models. In conclusion, the present study found that 1-NP treatment led to downregulation of cell viability and upregulation of apoptosis in RAW264.7 macrophages (Figure 9). The findings indicate that 1-NP led to apoptosis by inducing AIF nuclear translocation, which was caused by mitochondrial dysfunction. Our data suggest that mitochondrial dysfunction occurred due to changes in the expression of Bcl-2 family proteins. In addition, 1-NP induced ROS generation by reducing AOE activity. Moreover, 1-NP treatment led to the activation of the AMPK/Nrf-2/HO-1 pathway due to high levels of oxidative stress. Taken together, these results suggest that 1-NP causes downregulation of cell viability and upregulation of apoptosis through mitochondrial dysfunction, AIF nuclear translocation, ROS generation, AOE activity reduction, and AMPK/Nrf-2/HO-1 pathway activation. Figure 9: Scheme of the mechanism of 1-NP-induced apoptosis and cytotoxicity in RAW264.7 cells. After RAW264.7 macrophages were incubated with 1-NP, cell viability was downregulated via upregulation of apoptosis. 1-NP induced apoptosis by inducing AIF nuclear translocation, which was caused by mitochondrial dysfunction. The mitochondrial dysfunction induced by 1-NP occurred due to changes in the expression of Bcl-2 family proteins, including downregulation of Bcl-2 and Bcl-xL and upregulation of Bad, Bax, Bid, and tBid. 1-NP induced ROS generation through mitochondrial dysfunction and reduction of AOE activity. Additionally, 1-NP treatment led to the activation of the AMPK/Nrf-2/HO-1 pathway due to high levels of oxidative stress. These findings suggest that the downregulation of cell viability induced by 1-NP via upregulation of apoptosis was due to mitochondrial dysfunction, AIF nuclear translocation, ROS generation, AOE activity reduction, and AMPK/Nrf-2/HO-1 pathway activation.
4,837
2021-07-13T00:00:00.000
[ "Biology" ]
Comparison of Four Tourmalines for PS Activation to Degrade Sulfamethazine: Efficiency, Kinetics and Mechanisms Four types of tourmalines (TMs; S1, S2, S3 and S4) for activating persulfate (PS) to degrade sulfamethazine (SMT) were compared to find the most efficient catalyst. The four TMs were mesoporous materials with abundant functional groups, but were different in terms of size, composition, specific surface area, contact angle, and zero potential point. The removal of SMT in the S1, S2, S3 and S4 systems with PS at the optimum reaction conditions ([SMT]0 = 5 mg/L, [PS]0 = 4 mM, [TM]0 = 5 g/L, pH0 = 5, and T = 25 °C) was 99.0%, 25.5%, 26.0%, and 51.0%, respectively, which might be related to the metal content of TM. Although the degradation of SMT in the S1/PS/SMT system was not dominated by SO4•− and •OH, these radicals contributed to the SMT removal in the S2, S3, and S4 systems. 1O2 and holes both contributed to the degradation of SMT in the four systems. The metal at the X position might be related to the generation of 1O2 and holes, while the Fe of TM was mainly related to the generation of free radicals, such as SO4•−. Electrochemical impedance spectroscopy tests confirmed that the separation of electrons and holes on the TM surface could be promoted by adding PS and SMT. S1 presented a higher electron-transfer rate than the other three TMs. PS activation by TM with a high metal content at the X position provides an efficient and low-consumption treatment for refractory antibiotic wastewater. Introduction Tourmaline (TM) is a kind of annular silicate mineral with boron as the characteristic element. It is mainly found in granite pegmatite, gas hydrothermal deposits, and metamorphic rocks [1]. According to different characteristics, it can be divided into iron TM, alkali TM, and magnesium TM [2,3]. The general formula of TM is XY3Z6T6O18(BO3)3V3W, where X represents Na+, Ca2+, K+, or a vacancy and Y denotes Fe2+, Fe3+, Al3+, etc. [4,5]. The dissociation of Na and Ca leaves electrons at the X position. The electric neutrality of TM is maintained by trapping holes, resulting in the formation of an electron-hole structure, which agrees with vacancy formation in NiO [6,7]. The characteristics of TM, such as its abundant mineral components and sufficient surface-active sites, enable it to behave as a catalyst. Similarly to NiO, TM was proved to be able to activate persulfate (PS) to degrade contaminants through a non-radical pathway in our previous study [8]. TM is a common, economical, and eco-friendly natural mineral with a complex structure and chemical composition. The activation of PS by TM can overcome the disadvantages of other PS activation methods, such as high cost and poor practicability [9,10]. Therefore, the TM/PS process deserves comprehensive and in-depth study. However, very little research has been done in this area. Our previous studies on the activation of PS by TM have revealed part of the reaction mechanisms [8]; however, some mechanisms of the process, such as the influence of the metal at the X position and of the iron content on activating PS, remain unclear. In this study, the four types of tourmalines (TMs) were commercial products provided by Tianjin Hongyan Tianshan Stone Industry Co., Ltd. (Tianjin, China). All reagents used were of analytical grade and were used without further purification. Potassium persulfate, methanol, L-histidine and Nafion solution were purchased from Meryer (Shanghai) Chemical Technology Co., Ltd.
(Shanghai, China), Tianjin Concord Technology Co., Ltd. (Tianjin, China), Tianjin Solomon Biotechnology Co., Ltd. (Tianjin, China) and Qingdao Tenglong Microwave Technology Co., Ltd. (Qingdao, China), respectively. NaOH and Na2C2O4 were purchased from Tianjin Jiangtian Chemical Technology Co., Ltd. (Tianjin, China). The pH of the solution was adjusted with NaOH and HCl (from Tianjin Chemical Reagents Factory 5) solutions. Experimental Procedures The degradation of SMT (5 mg L−1) was conducted in a 100 mL Erlenmeyer flask at 25 ± 1 °C. The flask was put in a thermostatic water bath with the rotating speed set at 180 r/min. TM was added to the flask to initiate the reaction. The pH of the working solution was adjusted by the addition of 0.1 M NaOH and 0.1 M HCl solutions. Samples were taken at selected time intervals, and an appropriate amount of methanol was added to quench the reaction. Samples taken from the flask were filtered with a 0.22 µm membrane to remove the TM particles. The filtered samples were stored in brown vials, refrigerated at 4 °C, and measured within 24 h. Materials Characterization and Sample Analysis Scanning electron microscopy (SEM, JSM-7800F, JEOL, Tokyo, Japan), X-ray diffraction (XRD-7000, Shimadzu, Kyoto, Japan), and Fourier transform infrared spectroscopy (FT-IR, Nicolet IS50, Thermofisher, Waltham, MA, USA) were used to provide information regarding the surface and composition of the TMs. A surface area and porosimetry analyzer (Micromeritics ASAP2460, Micromeritics, Norcross, GA, USA) was used to provide information on the pore size, pore volume, and specific surface area of the TM samples. The contact angles of the TMs were measured using a contact angle meter (JC200DM, China). Zero-potential points were measured according to the approach described by Srivastava et al. [14]; details of the measurement are given in Text S1 (Zero-potential point measurement) in the Supporting Information (SI). Electrochemical impedance spectroscopy (EIS) analysis was conducted with an electrochemical workstation. FTO coated with TM, a saturated calomel electrode, and a platinum electrode were used as the working electrode, reference electrode and counter electrode, respectively. The preparation of the working electrode followed the method (Text S2, Working electrode preparation) proposed by Wang et al. [15]. The concentration of SMT was measured using a high-performance liquid chromatograph (Ultimate 3000, Thermofisher, Waltham, MA, USA) with a C18 column (details in Text S3, SMT measurement procedure). The results of the S1 system have been presented by Zhang et al. [8]. In this system, PS promoted the electron-hole separation on TM and contaminants were degraded mainly through a non-radical pathway [13,16]. In contrast, in the AgVO3/PS system, hydroxyl radicals and sulfate radicals were the reactive oxygen species in the degradation of organic contaminants; the presence of AgVO3 greatly promoted the decomposition of peroxodisulfate and produced a large number of free radicals [17,18]. We quote the results of the S1 system from that article for comparison (with permission from Elsevier). All of the samples were irregular in shape and aggregated. This aggregation might be caused by the spontaneous polarization and attractive force of the particles [19]. The particle size of S2 was more evenly distributed than that of S1. Small particles of S2 attached to the surface of large particles. The SEM image of S3 was similar to that of S2.
The particles of S3 attached to each other and there was accumulation among the particles. S4 had the smallest size, the particles were interdependent, and there was "chain connection" between small particles. EDS was used to analyze the element content of the four TM samples (Table S1). S1 contained O (52.2%), Na (1.6%), Mg (6.3%), Si (11.3%), Al (8.0%) and Ca (20.6%) [8]. S2 contained O (45.3%), Na (1.6%), Si (18.5%), Al (16.6%) and Fe (18.0%). S3 had O (46.4%), Na (1.6%), Si (21.9%), Al (15.0%), and Fe (15.1%). S4 contained O (53.9%), Na (0.8%), Si (11.9%), Al (9.2%), Mg (5.2%), Ca (5.3%), Fe (6.4%), Ti (3.8%) and P (3.5%). The element distribution and content on the surface of tourmaline would affect the element detection by EDS; therefore, elements with low content, such as B, were not detected. The amount of Fe followed the order of S2 > S3 > S4 > S1. Fe on the TM surface might provide the active sites for PS decomposition; however, the amount of Fe did not agree with the SMT removal in the four systems. This indicated that the iron content was not determinant for the contaminant degradation in the TM/PS process. The amount of Na and Ca on the X sites of TM followed the order of S1 > S4 > S2 = S3, which was in line with the SMT removal, indicating that these two elements might be related to contaminant removal. The dissociation of the metal at the X position induced the generation of the electron-hole structure, which enhanced the yield of 1O2 through the oxidation of O2•− or O2. The holes could also directly degrade SMT. Therefore, the higher the amount of metal at the X position, the faster the removal of contaminant might be. Figure 1. SEM images of S1 (a) ×2000 and (b) ×8000, S2 (c) ×2000 and (d) ×8000, S3 (e) ×2000 and (f) ×8000 and S4 (g) ×2000 and (h) ×8000. (a,b) were reproduced from Ref. [8] with permission from Elsevier. XRD The XRD patterns of the TMs are given in Figure S1 and indicate the crystal structure of the four catalysts. The composition of TM was greatly influenced by its place of origin. In addition to the characteristic peaks of standard TM, the XRD of S1 also included the peaks of AlSiO5, SiO2, Ca2MgSi2O7, Na2Si2O5, Na6(AlSiO4)6, etc. S2 and S3 included the peaks of SiO2, and S4 included the peaks of TiO2 and Ca2MgSi2O7. The presence of the peaks of calcium compounds in the XRD agreed with the EDS results. FTIR The FTIR spectra of the TMs shown in Figure 2 presented the peaks of the vibrations of the silica tetrahedron, the [BO3] triangle, the surface hydroxyl group and a triple octahedral cation. The positions of the ν(Si-O-(Al)Si) peak were 728 cm−1 and 540 cm−1 for S1, 708 cm−1 and 648 cm−1 for S2 and S3, and 709 cm−1 and 668 cm−1 for S4. The peaks of δ(Si-O) were at 479 cm−1 for S1 and S2, 503 cm−1 for S3, and 511 cm−1 for S4. BET Figure 3 presents the N2 adsorption-desorption isotherms and pore structure distributions of the four TMs, and Table S2 shows their surface area and pore parameters. According to the IUPAC classification, the N2 adsorption-desorption isotherms showed the characteristics of type IV isotherms, with obvious adsorption hysteresis in the pressure regions of 0.2-0.99 P/P0 (S1), 0.5-0.99 P/P0 (S2), 0.5-0.99 P/P0 (S3), and 0.4-0.99 P/P0 (S4), belonging to type H3 [20,21]. Moreover, there was no adsorption termination platform in the high-pressure region for the four samples, indicating that the pore structure was irregular and there were still large pores in the catalysts. Compared with S1, S2, S3 and S4 had wider pore size distributions. The specific surface areas, pore volumes, and average apertures were 5.10 m2/g, 0.0100 cm3/g and 14.3 nm, respectively, for S1; 3.85 m2/g, 0.0110 cm3/g and 11.3 nm, respectively, for S2; 3.58 m2/g, 0.0110 cm3/g and 12.4 nm, respectively, for S3; and 6.99 m2/g, 0.021 cm3/g and 11.2 nm, respectively, for S4.
The pore size of the TMs was mainly below 5 nm (see the insets of Figure 3), indicating that the TMs used in this study were mesoporous materials. Contact Angles The contact angle (θ) was used to characterize the hydrophilicity of the material surface. The solid surface is hydrophilic at θ < 90°, indicating that the liquid is more likely to wet the solid. The surface of the solid is hydrophobic at θ > 90°, indicating that the liquid can easily move on the surface. As shown in Figure 4, the contact angles of the four TMs were 33.39° (S1), 22.03° (S2), 31.32° (S3), and 32.03° (S4), indicating that the four samples were hydrophilic and could contact well with the PS solution, which was beneficial to the interface reaction. The electric field of TMs could change the structure of water clusters by changing the hydrogen bond network arrangement [22-24], which strengthened the hydrogen bonds between the water and TM. Zero-Potential Point In order to compare the electric properties of the four TM surfaces and analyze their charged properties in the reaction system, the zero-potential points of the samples were determined. As shown in Figure S2, pHzpc,S1 = 8.4, pHzpc,S2 = 8.0, pHzpc,S3 = 6.7, and pHzpc,S4 = 8.0. At pH < pHzpc, the TM surface was positively charged, which was conducive to contact with S2O8 2−. At pH > pHzpc, the TM surface was negatively charged and had electrostatic repulsion to S2O8 2−. The pHzpc of S1 was higher than that of S2, S3, and S4, indicating that S1 had a wider positive charge range and could have better contact with the negative S2O8 2−. Comparison of Four TMs The efficiency of the four TMs in activating PS to degrade SMT was compared, as presented in Figure 5. The removal of SMT within 150 min by PS oxidation only was around 10%, and there was no adsorption removal of SMT by the four TMs. The SMT removal increased with the reaction time in the S1/PS, S2/PS, S3/PS and S4/PS systems. The removal of SMT at 150 min in the S1, S2, S3 and S4 systems was 99.0%, 25.5%, 26.0% and 51.0%, respectively. The removal followed the decreasing order of S1 > S4 > S2 ≈ S3, in line with the amount of Na and Ca on the X sites of the TMs. S1, with the most abundant Na and Ca, presented the highest SMT removal. The results indicated that the metal at the X position might have an important effect on the PS activation and degradation of SMT. As depicted in Section 3.4, Na and Ca of the TMs dissociated in water and formed "electron-holes". The holes and electrons were separated effectively in the presence of PS and SMT. 1O2 was generated by the transformation of O2•− produced by the reduction of O2 with electrons, or by the oxidation of O2 by holes [25,26]. Both 1O2 and holes contributed to the SMT degradation.
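Degradation curves such as those in Figure 5 are commonly summarized by pseudo-first-order kinetics; the following Python sketch shows the standard reduction of a removal time series to an observed rate constant. The time series is an illustrative placeholder, not values digitized from Figure 5:

```python
# Pseudo-first-order reduction of a degradation time series:
# ln(C0/Ct) = k_obs * t, so k_obs is the slope of ln(C0/Ct) versus t.
# The concentrations below are illustrative placeholders.
import numpy as np

t = np.array([0, 30, 60, 90, 120, 150])                      # time (min)
C_over_C0 = np.array([1.00, 0.62, 0.38, 0.22, 0.12, 0.07])   # hypothetical S1/PS run

k_obs, _ = np.polyfit(t, np.log(1.0 / C_over_C0), 1)         # slope = k_obs (min^-1)
removal_150 = 100.0 * (1.0 - C_over_C0[-1])

print(f"k_obs = {k_obs:.4f} min^-1, removal at 150 min = {removal_150:.1f}%")
```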
The Effect of pH TM is the only mineral in nature with spontaneous polarity, and it has the ability to regulate the solution pH. Therefore, the ability of the four TMs to regulate the solution pH was compared (Figure S3). When the initial pH was equal to 2, the presence of S1, S2, S3, and S4 increased the final pH to 8.1, 6.8, 3.0, and 8.0, respectively. At initial pH 5, the final pH of the S1, S2, S3 and S4 systems was 8.4, 8.1, 6.4 and 8.1, respectively. At initial pH 10, the final pH of the S1, S2, S3 and S4 systems was 8.3, 8.3, 7.8, and 8.3, respectively. The pH regulation ability of samples S1 and S4 was higher than that of S2 and S3, which was in line with the SMT removal in the four systems. The ability of tourmaline to adjust the pH of an acidic solution was mainly derived from two aspects [27,28]: (i) the spontaneous permanent electrical polarity of TM: the electric field on the surface of TM particles could electrolyze water to produce H2 and hydrated hydroxyl ions, which increased the solution pH; and (ii) the surface properties of TM: there were many hydroxyl groups and metal dangling bonds on the surface of the crushed TM particles, which could adsorb or replace H+ in aqueous solution, increasing the solution pH. It could be found from the EDS and BET analyses that, compared with S1 and S4, the S2 and S3 samples had relatively smaller specific surface areas and lower contents of Na and Ca on the surface and, thus, fewer sites that could adsorb or replace H+. Therefore, the final pH of the S2 and S3 systems stabilized at acidic or neutral values because these two TMs had a weak ability to adjust the pH of acidic solution, whereas S1 and S4, with higher contents of Na and Ca, adjusted the solution pH to alkaline. The pH decrease of the alkaline solution was primarily due to the fact that the metal on the TM surface was likely to adsorb OH−, resulting in surface hydroxylation and a decrease of the solution pH. Thus, the solution pH could be adjusted within a short time after TM was added. CO2 in the air reached an equilibrium at the gas-liquid interface as the reaction proceeded, resulting in the stabilization of the solution pH. The ability of TM to adjust the solution pH was also affected by the size and dosage of the tourmaline, as well as by the stirring conditions. The effect of pH on the efficiency of PS activation by TM was investigated, and the results are provided in Figure 6. At an initial pH of 2, the final removal of SMT in the S1, S2, S3, and S4 systems was 99.8%, 66.8%, 99.0% and 55.7%, respectively. At an initial pH of 5, the final removal of SMT in the S1, S2, S3, and S4 systems was 99.0%, 25.5%, 26.0%, and 51.0%, respectively. At an initial pH of 10, the final removal of SMT in the S1, S2, S3, and S4 systems was 96.5%, 29.6%, 40.6% and 54.8%, respectively. The removal of SMT in the S2, S3, and S4 systems at pH0 = 10 was slightly higher than that at pH0 = 5. At pH0 = 10, the spontaneous polarization of TM maintained the solution pH at alkaline values. S2, S3, and S4 contained Fe, which could activate PS to produce SO4•−, and this radical could be converted into •OH under alkaline conditions, promoting the degradation of SMT in the system. Under acidic conditions, the removal of SMT in the S1 system was greatly improved, which might be due to the protonation of the tourmaline surface, being conducive to the contact of S2O8 2− with TM. The enhanced SMT removal at pH = 2 in the S2, S3 and S4 systems might be due to the Fe content of the particles and the solution pH variation. The Fe content in the three samples followed a decreasing order of S2 > S3 > S4 (see the EDS results above). At initial pH 2, the solution pH was maintained at around 8.0 (S4), 6.8 (S2), and 3.0 (S3). The acidic pH of the S2 and S3 systems might promote the release of iron into the solution, which could accelerate the activation of PS to produce more SO4•− for SMT degradation. The high solution pH of the S4 system hindered Fe dissolution during the reaction and thus PS activation. In order to confirm this hypothesis, methanol was used to scavenge SO4•− and •OH at pH = 2. The SMT removal in the S2, S3, and S4 systems was hindered by the presence of methanol (Figure S4). The results confirmed the hypothesis that, for the S2, S3, and S4 systems, the acidic condition promoted Fe dissolution and thus PS activation to produce more SO4•−. It could also be concluded that, compared with the iron content, the pH variation during the reaction was more important to the SMT removal in the TM/PS systems. Reactive Species In this study, we used the radical quenching method to identify the role of free radicals. Methanol (MA) and tert-butyl alcohol (TBA) were used to investigate the radicals generated in the TM/PS systems. The reaction rate constant between MA and •OH is 9.7 × 10^8 M−1 s−1, and that between TBA and •OH is 3.8-7.6 × 10^8 M−1 s−1; the rate constant is 1.0 × 10^7 M−1 s−1 for MA and SO4•−, and 4-9.1 × 10^5 M−1 s−1 for TBA and SO4•− [29,30]. Therefore, MA could be used to scavenge both •OH and SO4•−, while TBA scavenged only •OH. As shown in Figure 7, MA and TBA did not significantly inhibit the degradation of SMT in the S1 system, indicating that neither SO4•− nor •OH was the main species for SMT removal. The addition of TBA and MA to the S2 system decreased the SMT removal at 150 min from 25.5% to 20.3% and 12.0%, respectively, indicating that SO4•− and •OH played almost equal roles in the SMT removal. The SMT removal in the S3 and S4 systems in the presence of MA decreased by 7.70% and 30.1%, respectively. However, TBA did not show any inhibiting effect on the SMT removal in the S3 and S4 systems.
The results indicated that SO4•− was the only radical responsible for SMT removal in these two systems under the experimental conditions. L-histidine and sodium oxalate were used to capture 1O2 and holes on the TM surface, respectively (Figure 8). For the S1 system, the SMT removal in the presence of L-histidine and sodium oxalate was 37.4% and 59.9%, respectively, being much lower than that in the control experiment (97.8%). The SMT removal in the S2 system with added L-histidine and sodium oxalate decreased by 10.1% and 16.8%, respectively. For the S3 system, the addition of Na2C2O4 did not impose any negative influence on the SMT removal, suggesting that holes on the TM surface did not affect the contaminant removal. The addition of L-histidine to the S3 system decreased the SMT removal by 12%. For the S4 system, the presence of L-histidine and sodium oxalate decreased the SMT removal from 51.0% to 10.9% and 22.5%, respectively. These results indicated that, unlike in S3, both 1O2 and holes might play important roles in the SMT degradation in the S1, S2, and S4 systems. The surface metals (Na and Ca) of the TMs dissociated in water and formed "electron-holes". When PS and SMT coexisted in the systems, the holes and electrons could be separated effectively. Oxygen captured electrons to generate O2•−, which could be directly recombined to generate 1O2 to degrade SMT. Holes could oxidize O2 to generate 1O2 [25,26].
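The quenching data above can be turned into a rough apportioning of the degradation pathways. The bookkeeping below is our own illustration, applied to the reported S2/PS numbers, under the same assumptions used in the text (TBA quenches only •OH; MA quenches both •OH and SO4•−):

```python
# Rough pathway apportioning from the quenching experiments for the S2/PS
# system (removal at 150 min: 25.5% control, 20.3% with TBA, 12.0% with MA).
# Assumes TBA quenches only •OH while MA quenches both •OH and SO4•-;
# whatever survives MA is attributed to non-radical routes (1O2, holes).
control, with_tba, with_ma = 25.5, 20.3, 12.0

oh_share    = (control - with_tba) / control   # removal lost when •OH is quenched
so4_share   = (with_tba - with_ma) / control   # additional loss when SO4•- is also quenched
other_share = with_ma / control                # residual, non-radical contribution

print(f"•OH ≈ {oh_share:.0%}, SO4•- ≈ {so4_share:.0%}, non-radical ≈ {other_share:.0%}")
```

This simple split is consistent with the qualitative conclusion above that SO4•− and •OH played comparable roles in the S2 system, with a substantial non-radical contribution.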
Electrochemical Test EIS was conducted to investigate the charge transfer resistance of the four TMs (Figure 9). The smaller the arc radius in the Nyquist plot, the smaller the charge transfer resistance, which results in a faster charge transfer rate and thus a more effective separation of holes and electrons on the TM surface. Compared with TM alone, the Nyquist arc of TM in the presence of PS and SMT was smaller, indicating that the addition of PS and SMT promoted the charge transfer. Compared with those of S2, S3, and S4, the Nyquist arc radius of S1 with the addition of PS and SMT decreased the most, indicating the highest efficiency of the separation of holes and electrons, which agreed with the metal content on the S1 surface. This confirmed that the formation of "electron-holes" in the TM/PS/SMT system was related to the dissociation of Na and Ca ions on the TM surface. 3.6. Reaction Mechanisms S1 was produced in Xinjiang and contained the least amount of iron. The reaction mechanism of the S1/PS system can be found in our previous research [8]. The S2 sample was produced in Hunan Province, and its iron content was the highest among the four samples. The SMT degradation mechanism of the S2/PS/SMT system, confirmed by the scavenging experiments and the electrochemical tests, was proposed to involve three pathways: (i) the surface metals (Na and Ca) of the TM dissociated in water and formed "electron-holes", and the holes on the TM surface directly oxidized some SMT; (ii) the separated electrons were captured by O2 to generate O2•−, which was further transformed into 1O2 (holes could also oxidize O2 to 1O2); and (iii) Fe on the TM surface activated PS to produce SO4•− and •OH. SO4•−, •OH, 1O2 and holes all contributed to the SMT removal. Although the Fe content in S2 was higher than that in S1, the SMT removal in the S2 system was lower than that in the S1 system, which might be due to the low content of Na and Ca at the X position. Both the S3 and S4 samples were produced in Xinjiang, ranking second and third in terms of iron content, respectively. The SMT degradation mechanisms of the S3 and S4 systems were similar to that of the S2 system. However, the radical responsible for the degradation of SMT in these two systems was SO4•− rather than •OH. Therefore, SO4•−, 1O2 and holes were the main species removing SMT in these two systems. Conclusions Four TMs were investigated in terms of their ability to activate PS to degrade SMT. The contact angles indicated that the four TMs were all hydrophilic and could contact well with the aqueous solution, which is beneficial for real wastewater treatment. The pHzpc of S1 indicated that S1 had a wider positive charge range and could have better contact with the negative S2O8 2−. The SMT removal in the four TM/PS systems followed the decreasing order of S1 > S4 > S2 ≈ S3, which was in line with the Na and Ca content of the TMs, indicating that the contaminant removal was related to the content of these two metals. The amount of Fe in the TMs followed the order of S2 > S3 > S4 > S1, which would affect the production of SO4•− and •OH. For S2, S3, and S4, the release of Fe into the solution was promoted under acidic conditions, which enhanced the activation of PS to produce more SO4•− for SMT degradation. The scavenging experiments proved that SO4•− and •OH, together with 1O2 and holes, contributed to the SMT removal in the S2/PS system, while SO4•−, 1O2 and holes were the main species for the SMT removal in the S3/PS and S4/PS systems. The activity of TM was mainly affected by the metal content at the X position and the Fe content of the TM. The metal at the X position determined the non-radical degradation pathway of SMT, while Fe was related to the generation of free radicals, the yield of which was affected by the solution pH. EIS confirmed a faster electron transfer rate in the S1/PS system compared with the other three systems, which might be due to the dissociation of the high content of Na and Ca.
As a green, common, and cheap catalyst, TM might be used to activate oxidants for the treatment of real wastewater.
9,762.2
2022-03-01T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Rainbow polygons for colored point sets in the plane Given a colored point set in the plane, a perfect rainbow polygon is a simple polygon that contains exactly one point of each color, either in its interior or on its boundary. Let $\operatorname{rb-index}(S)$ denote the smallest size of a perfect rainbow polygon for a colored point set $S$, and let $\operatorname{rb-index}(k)$ be the maximum of $\operatorname{rb-index}(S)$ over all $k$-colored point sets in general position; that is, every $k$-colored point set $S$ has a perfect rainbow polygon with at most $\operatorname{rb-index}(k)$ vertices. In this paper, we determine the values of $\operatorname{rb-index}(k)$ up to $k=7$, which is the first case where $\operatorname{rb-index}(k)\neq k$, and we prove that for $k\ge 5$, \[ \frac{40\lfloor (k-1)/2 \rfloor -8}{19} \leq\operatorname{rb-index}(k)\leq 10 \bigg\lfloor\frac{k}{7}\bigg\rfloor + 11. \] Furthermore, for a $k$-colored set of $n$ points in the plane in general position, a perfect rainbow polygon with at most $10 \lfloor\frac{k}{7}\rfloor + 11$ vertices can be computed in $O(n\log n)$ time. Introduction Given a colored point set in the plane, in this paper we study the problem of finding a simple polygon containing exactly one point of each color. Formally, the problem we consider is the following. Let k ≥ 2 be an integer and let {1, ..., k} be k distinct colors. For every 1 ≤ i ≤ k, let S_i denote a finite set of points of color i in the plane. We always assume that S_i is nonempty and finite for all i ∈ {1, ..., k}, and that S = S_1 ∪ · · · ∪ S_k is in general position (that is, no three points of S are collinear). For a simple polygon P with m vertices (or a simple m-gon) and a point x in the plane, we say that P contains x if x lies in the interior or on the boundary of P. Given a k-colored point set S = S_1 ∪ · · · ∪ S_k and a simple polygon P in the plane, we call P a rainbow polygon for S if P contains at most one point of each color; and P will be called a perfect rainbow polygon if it contains exactly one point of each color. The perfect rainbow polygon problem for a colored point set S is that of finding a perfect rainbow polygon with the minimum number of vertices. One can easily check that a perfect rainbow polygon always exists for a colored point set. A way of constructing such a polygon is described below, using the following well-known property of a plane tree: from a tree T embedded in the plane with straight-line edges, a simple polygon can be built by traversing the boundary of the unbounded face of T, placing a copy of a vertex infinitesimally close to that vertex each time it is visited, and connecting the copies according to the traversal order. One can imagine this simple polygon as the "thickening" of the tree. Thus, for a colored point set S, to build a perfect rainbow polygon we can choose one point of each color, form a star connecting one of these points to the rest, and thicken the star; see Figure 1a. Note that the simple polygon obtained in this way can be as close to the star as we wish, so that it contains no other points in S, apart from the points in S that we have chosen.
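To make the star-and-thicken construction concrete, here is a minimal Python sketch of one way to realize it (our own illustration, using Shapely's buffer as the thickening step; it assumes eps is chosen small enough that the thickened star avoids all points of S other than the chosen representatives):

```python
# Sketch of the "choose one point per color, form a star, thicken" construction.
# For sufficiently small eps, the eps-neighborhood of the star is a simple
# polygon containing exactly one point of each color.
from shapely.geometry import MultiLineString, Point

def perfect_rainbow_polygon(points_by_color, eps=1e-3):
    reps = [pts[0] for pts in points_by_color.values()]  # one point per color
    center, leaves = reps[0], reps[1:]
    star = MultiLineString([(center, leaf) for leaf in leaves])
    return star.buffer(eps)  # polygon within distance eps of the star

# Example: a 3-colored point set; the polygon contains one point of each color.
S = {"red": [(0.0, 0.0)], "blue": [(4.0, 1.0)], "green": [(1.0, 3.0)]}
poly = perfect_rainbow_polygon(S)
assert all(poly.covers(Point(pts[0])) for pts in S.values())
```

Unlike the polygons studied in this paper, the buffered polygon has many vertices (Shapely approximates the rounded caps by short edges), but it makes the existence argument tangible.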
However, finding a perfect rainbow polygon of minimum size for a given colored point set (where the size of a polygon is the number of its vertices) is in general much more difficult. We believe that this problem is NP-complete. Therefore, we focus on giving combinatorial bounds for the size of minimum perfect rainbow polygons. Let rb-index(S) denote the rainbow index of a colored point set S; that is, the smallest size of a perfect rainbow polygon for S. We then define the rainbow index of k, denoted by rb-index(k), to be the largest rainbow index among all k-colored point sets S; that is, rb-index(k) = max {rb-index(S) : S is a k-colored point set}. (1) In other words, rb-index(k) is the smallest integer such that, for every k-colored point set S, there exists a perfect rainbow polygon of size at most rb-index(k). The two main results in this paper are the following. First, we determine the values of the rainbow index up to k = 7, which is the first case where rb-index(k) ≠ k; namely, rb-index(3) = 3, rb-index(4) = 4, rb-index(5) = 5, rb-index(6) = 6, and rb-index(7) = 8. Second, we prove the following lower and upper bounds for the rainbow index: (40⌊(k − 1)/2⌋ − 8)/19 ≤ rb-index(k) ≤ 10⌊k/7⌋ + 11. Furthermore, for a k-colored set of n points in the plane, a perfect rainbow polygon of size meeting this upper bound can be computed in O(n log n) time. The rainbow index for small values of k is analyzed in Section 3. In Sections 4 and 5, we provide our upper and lower bounds for the rainbow index, respectively. These bounds are based on the analysis of the complexity of noncrossing covering trees for sets of points, under a new measure defined in this paper. This measure and the relationship between perfect rainbow polygons and noncrossing covering trees are given in Section 2. Related previous work Starting from the celebrated Ham-Sandwich theorem, a considerable amount of research about discrete geometry on colored point sets (or mass distributions) has been done. Figure 1: (a) Thickening a tree to obtain a perfect rainbow polygon. Different colors are represented by different geometric objects. (b) A noncrossing covering tree for the eight black points that can be partitioned into five segments, s1 = u1u8, s2 = u2u3, s3 = u2u4, s4 = u2u5, and s5 = u6u7; and two forks, u6 with multiplicity 1 and u2 with multiplicity 2. For instance, given cg red points and dg blue points in the plane, where c, d, and g are positive integers, the Equitable Subdivision Theorem establishes that there exists a subdivision of the plane into g convex regions such that each region contains precisely c red points and d blue points [10,34]. It is also known that every d-colored set of points in general position in R^d can be partitioned into n subsets with disjoint convex hulls such that the set of points and all color classes are partitioned as evenly as possible [12]. For a wide range of geometric partitioning results, the reader is referred to [6,7,8,10,12,21,24,25,30,34] and the references therein. In addition to geometric partitions, for colored points in the plane some research focuses on geometric structures covering the points in some specific way. For instance, covering the colored points with noncrossing monochromatic matchings [18], noncrossing heterochromatic matchings [26], noncrossing alternating cycles and paths [29], noncrossing alternating spanning trees [11,26] or noncrossing K1,3 stars [1]. In other papers, the main goal is selecting k points with k distinct colors (a rainbow subset) from a k-colored point set such that some geometric properties of the rainbow subset are maximized or minimized.
Rainbow subsets with maximum diameter are investigated in [19,23], with minimum diameter in [20,33], and rainbow subsets optimizing matchings under several criteria are studied in [9]. In addition, several traditional geometric problems for uncolored point sets become NP-hard for colored point sets. For instance, the following problems are NP-complete [23]: computing a rainbow subset minimizing (maximizing) the length of its minimum spanning tree, computing a rainbow subset minimizing its convex hull, or computing a rainbow subset maximizing the distance between its closest pair. Given a 3-colored point set R ∪ B ∪ G consisting of red, blue, and green points in the plane, a well-known result is that there exists an empty heterochromatic triangle, where the three vertices have distinct colors [15]. In particular, a heterochromatic triangle of minimum area cannot contain any other point from R ∪ B ∪ G in its interior, hence its interior is empty, and its boundary contains exactly one point of each color. This implies that rb-index(3) = 3. Related work [7] deals with colored lines instead of colored points, showing that in an arrangement of 3-colored lines, there always exists a line segment intersecting exactly one line of each color. Aloupis et al. [4] study the problem of coloring a given point set with k colors so that every axis-aligned strip containing sufficiently many points contains a point from each color class. Covering trees versus perfect rainbow polygons Given a set of (monochromatic) points, in this section we derive a lower bound for the size of simple polygons that contain the given points and have arbitrarily small area. We also provide a lower bound for the size of a perfect rainbow polygon for some colored point sets. A noncrossing covering tree for a set S of points in the plane is a noncrossing geometric tree (that is, a plane straight-line tree) such that every point of S lies at a vertex or on an edge of the tree; see Figure 1b. Let T be a noncrossing covering tree whose vertices can be collinear. Similarly to [16], we define a segment of T as a path of collinear edges in T. Two segments of T may cross at a vertex of degree 4 or higher; we are interested in pairwise noncrossing segments. Any vertex of degree two and incident to two collinear edges can be suppressed; consequently, we may assume that T has no such vertices. Let M be a partition of the edges of T into the minimum number of pairwise noncrossing segments. Let s = s(T) denote the number of segments in M. A fork of T (with respect to M) is a vertex v that lies in the interior of a segment ab ∈ M and is an endpoint of another segment in M. The multiplicity of a fork v is 2 if it is the endpoint of two segments that lie on opposite sides of the supporting line of ab; otherwise its multiplicity is 1. See Figure 1b for an example. Let t = t(T) denote the sum of multiplicities of all forks in T with respect to M.
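As a concrete instance of these parameters (our own arithmetic, read off the caption of Figure 1b): that covering tree has s = 5 pairwise noncrossing segments, and its forks u6 (multiplicity 1) and u2 (multiplicity 2) give t = 1 + 2 = 3; the lemma below therefore yields, for every ε > 0, an enclosing polygon with 2s + t = 2·5 + 3 = 13 vertices.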
For every vertex v of T, let Dv be a disk of radius δ centered at v. We may assume that δ > 0 is so small that the disks Dv, v ∈ V(T), are pairwise disjoint, and each Dv intersects only the edges of T incident to v. Then the edges of T incident to v partition Dv into deg(v) sectors. If deg(v) ≥ 3, at most one of the sectors subtends a flat angle (that is, an angle equal to π). If deg(v) ≤ 2, none of the sectors subtends a flat angle by assumption. Conversely, if one of the sectors subtends a flat angle, then the two incident edges are collinear; they are part of the same segment (by the minimality of M), and hence v is a fork of multiplicity 1. In every sector that does not subtend a flat angle, choose a point in Dv on the angle bisector. By connecting these points in counterclockwise order along T, we obtain a simple polygon P that contains T. Note that P lies in the δ-neighborhood of T, so area(P) is less than the area of the δ-neighborhood of T. The δ-neighborhood of a line segment of length ℓ has area 2ℓδ + πδ². The δ-neighborhood of T is the union of the δ-neighborhoods of its segments. Consequently, if L is the sum of the lengths of all segments in M, then the area of the δ-neighborhood of T is bounded above by 2Lδ + sπδ², which is less than ε if δ > 0 is sufficiently small.

It remains to show that P has 2s + t vertices; that is, the total number of sectors whose angle is not flat is precisely 2s + t. We define a perfect matching between the vertices of P and the set of segment endpoints and forks (with multiplicity) in each disk Dv independently for every vertex v of T. If v is not a fork, then Dv contains deg(v) vertices of P and deg(v) segment endpoints. If v is a fork of multiplicity 1, then Dv contains deg(v) − 1 vertices of P and deg(v) − 2 segment endpoints. Finally, if v is a fork of multiplicity 2, then Dv contains deg(v) vertices of P and deg(v) − 2 segment endpoints. In all cases, there is a one-to-one correspondence between the vertices in P lying in Dv and the segment endpoints and forks (with multiplicity) in Dv. Consequently, the number of vertices in P equals the sum of the multiplicities of all forks plus the number of segment endpoints, which is 2s + t, as required.

Next, we establish a relation between point sets and covering trees.

Lemma 2. Let S be a finite set of points in the plane, not all on a line. Then there exists an ε > 0 such that if S is contained in a simple polygon P with m vertices and area(P) ≤ ε, then S admits a noncrossing covering tree T and a partition of the edges into pairwise noncrossing segments such that 2s + t ≤ m.

Proof. Let m ≥ 3 be an integer such that for every n ∈ N, there exists a simple polygon Pn with precisely m vertices such that S ⊂ int(Pn) and area(Pn) ≤ 1/n. The real projective plane RP² is a compactification of R². By compactness, the sequence (Pn)n≥3 contains a convergent subsequence of polygons in RP². The limit is a weakly simple polygon P with precisely m vertices (some of which may coincide) such that S ⊂ P and area(P) = 0. The edges of P form a set of pairwise noncrossing line segments (albeit with possible overlaps) whose union is a connected set that contains S. In particular, the union of the m edges of P forms a noncrossing covering tree T for S. The transitive closure of the overlap relation between the edges of P is an equivalence relation; the union of each equivalence class is a line segment.
These segments are pairwise noncrossing (since the edges of P are pairwise noncrossing), and yield a covering of T with a set M of pairwise nonoverlapping and noncrossing segments. Analogously to the proof of Lemma 1, at each vertex v of T, there is a one-to-one correspondence between the vertices in P located at v and the segment endpoints and forks (with multiplicity) located at v. This implies 2s + t = m with respect to M.

An immediate consequence of Lemma 2 is a lower bound on the size of simple polygons with arbitrarily small area that enclose a point set S.

Corollary 3. Let S be a finite set of points in the plane, not all on a line, and let T be a noncrossing covering tree for S minimizing 2s + t = m*. Then there exists an ε > 0 such that if S is contained in a simple polygon P with m vertices and area(P) ≤ ε, then m* ≤ m.

Proof. By Lemma 2, there exists an ε > 0 such that if a simple polygon P with m vertices and area(P) ≤ ε contains S, then P also contains a noncrossing covering tree for S. Therefore, by the minimality of T, necessarily m* ≤ m.

A similar lower bound can be established for perfect rainbow polygons. In particular, for every set S of k points in the plane one can build a (k + 1)-colored point set Ŝ, such that finding a noncrossing covering tree for S minimizing 2s + t is equivalent to finding a minimum perfect rainbow polygon for Ŝ.

Theorem 4. Let S be a set of k points in general position in the plane, and let T be a noncrossing covering tree for S minimizing 2s + t = m*. Then there exists a (k + 1)-colored point set Ŝ such that every perfect rainbow polygon for Ŝ has at least m* vertices.

Proof. Note that m* ≤ 2k since a star centered at one of the points of S is a covering tree for S with k − 1 segments and no forks. By Lemma 2, there is an ε > 0 such that if S is contained in a simple polygon P with m vertices and area(P) ≤ ε, then S admits a noncrossing covering tree and a partition of its edges into segments such that 2s + t ≤ m. We construct a (k + 1)-colored point set Ŝ from the points in S by adding a dense point set Sk+1. Each point of S has a unique color and all points in Sk+1 have the same color. Specifically, Sk+1 is the union of two disjoint ε/(2k)-nets for the range space of triangles [32]; that is, every triangle of area ε/(2k) or more contains at least two points in Sk+1. Now suppose, for the sake of contradiction, that there exists a perfect rainbow polygon P for Ŝ with x vertices, where x < m*. Triangulate P arbitrarily into x − 2 triangles. The area of the largest triangle is at least area(P)/(x − 2). Since this triangle contains at most one point from Sk+1, we have area(P)/(x − 2) ≤ ε/(2k), and so area(P) ≤ ε. By the choice of ε, S admits a noncrossing covering tree and a partition of its edges into segments such that 2s + t ≤ x. This contradicts the minimality of T, which completes the proof.

We conjecture that both problems, finding a noncrossing covering tree minimizing 2s + t for a given point set and finding a minimum perfect rainbow polygon for a given colored point set, are NP-complete. Many geometric variants of the classical set cover problem are known to be NP-hard. For example, covering a finite set of points by the minimum number of lines is APX-hard [13,28,31], see also [17,27]. The minimum-link covering problem (finding a covering path for a set of points with the smallest number of segments) is NP-complete [5]. However, in these problems, the covering objects (lines or edges) may cross.
There are few results on covering points with noncrossing segments. It is known, for example, that it is NP-hard to find a maximum noncrossing matching in certain geometric graphs [3]. The problem of, given an even number of points, finding a noncrossing matching that minimizes the length of the longest edge is also known to be NP-hard [2].

3 Rainbow indexes of k = 3, 4, 5, 6, 7

This section is devoted to determining the rainbow indexes rb-index(k) up to k = 7. The following theorem is the main result of this section, and it summarizes the results proven below. Our proof for Theorem 5 relies on the following lemma (Lemma 6), which may be of independent interest. Lemma 6 guarantees the existence of a strip containing at least one point of each color, with the additional property that there are at least two color classes that have only one point in the strip.

Before proving the lemma, we introduce some notation. The line segment connecting two points x and y in the plane will be denoted by xy (or yx). Further, a ray emanating from x and passing through y is denoted by →xy. Given two parallel lines ℓ1 and ℓ2 defining a strip ST, we denote by cl(ST) the closure of the strip; that is, the set of points in the interior of the strip or on the lines ℓ1 and ℓ2.

Let A = conv(S1) and B = conv(S3). We describe a sweepline algorithm in which we maintain a strip ST between two parallel lines ℓ1 and ℓ2. Initially, ℓ1 = L and ℓ2 = U are horizontal lines. We also maintain several invariants throughout. During the algorithm, ℓ1 rotates clockwise about the point in ℓ1 ∩ S1, and ℓ2 rotates clockwise about the point in ℓ2 ∩ S3; these are called the pivot points of ℓ1 and ℓ2, respectively. Using a fully dynamic convex hull data structure [22], we maintain the convex hull of the points of S \ (A ∪ B) in cl(ST), above ℓ1, and below ℓ2, respectively. By computing tangent lines from the two pivot points to the three convex hulls, we can maintain an event queue of when the next point in S \ (A ∪ B) enters or exits the strip ST (it is deleted from one convex hull and inserted into another) and when the pivot ℓ1 ∩ S1 or ℓ2 ∩ S3 must be updated.

Notice that if k = 3, then the strip defined by ℓ1 and ℓ2 in Lemma 6 is empty, so the triangle xyz is empty. As a consequence, Lemma 6 provides an alternative proof for rb-index(3) = 3.

In the remainder of this section, we refer to colors 1, 2, 3, 4, 5, 6, and 7 (if they exist) as red, blue, green, yellow, pink, orange, and black, respectively (e.g., a 4-colored point set will be red, blue, green, and yellow). Furthermore, when applying Lemma 6, we may assume without loss of generality that the colors i1, i2, and i3 are red, blue, and green, respectively, the lines ℓ1 and ℓ2 are horizontal, the point x is to the left of point y on ℓ1, and if ℓ2 passes through another point w of S that is not green, then w is yellow. In addition, if p is the intersection point between a ray →zu and a line ℓ, then p′ will denote a point infinitesimally close to p on the ray →zu towards z; see Figure 3c.

Proof. We first show that rb-index(4) ≥ 4. Consider the 4-colored point set in Figure 3a, where S1 = {x}, S2 = {y}, S3 = {z}, and S4 consists of two points in the interior of the triangle xyz. Every triangle that contains a point of color 1, 2, and 3 must contain the triangle xyz, hence two points of S4. It follows that there exists no perfect rainbow triangle. We now show that rb-index(4) ≤ 4.
Let S = S1 ∪ S2 ∪ S3 ∪ S4 be a point set in the plane whose points are colored red, blue, green, and yellow. By Lemma 6, there is a strip defined by two horizontal lines, ℓ1 and ℓ2, where ℓ1 passes through a red point x and a blue point y, and ℓ2 passes through a green point z, such that either there are only yellow points in the interior of the strip, or the strip is empty and ℓ2 passes through a yellow point w. In the first case, we rotate the horizontal ray emanating from z clockwise until it encounters a yellow point u in the interior of the strip; see Figure 3b. Let p be the intersection point of →zu and ℓ1. By symmetry, we may assume that p is to the left of x or on the segment xy. If p is to the left of x, then py ∪ pz is a covering tree T for {x, y, z, u}, which can be thickened to a perfect rainbow quadrilateral by Lemma 1; see Figure 3b. If p is on xy, then yxzp′ is a perfect rainbow quadrilateral; see Figure 3c. Finally, if the strip is empty and ℓ2 contains a yellow point w, then xyzw is a perfect rainbow quadrilateral.

Before moving to the next proposition, let us prove the following useful lemma that in fact works for monochromatic points.

Proof. We first show that we can label the three points in P by a, b, c so that the triangles xya, yzb, and zxc are interior-disjoint. By symmetry, we may assume that the line passing through a and c intersects edges xy and xz of the triangle xyz, and a is closer to xy than c. Let p = →ya ∩ →zc.

We say that the triangle rst described in Lemma 8 is expedient with respect to {a, b, c}. Note that an expedient triangle can be computed in O(1) time. Further, note that the labeling given in Lemma 8 is not unique, and thus expedient triangles are not uniquely determined. Using Lemma 8, one can find perfect rainbow hexagons for six colors in some special colored point sets, as the following lemma shows.

(Figure 5c), then r′s′t′ is an expedient triangle, where d lies on xr′, a lies on ys′, and b lies on zt′. In all three cases, r′s′t′ ⊂ rst, as required. Since r′s′t′ ⊂ rst, and since d ∈ rst and d ∉ r′s′t′, it follows that r′s′t′ contains fewer points of S than rst. Hence we can repeat this procedure until we find an expedient triangle that is empty of points of S. From this expedient triangle, we can obtain a perfect rainbow hexagon for the six colors involved, by slightly moving the vertices of the expedient triangle towards the vertices of the triangle xyz as depicted in Figure 5d. Furthermore, an empty expedient triangle can be computed in O(n) time: we can start with an arbitrary expedient triangle rst. For each point q ∈ S, we can test whether q ∈ rst and update the triangle to a smaller one r′s′t′ ⊂ rst if necessary in O(1) time. Consequently, a perfect rainbow polygon for six of the colors can also be found in O(n) time.

We are now ready to prove the following result. The set S(5) consists of four one-element color classes S1 = {x}, S2 = {y}, S3 = {z}, and S4 = {w}, where w is in the interior of the triangle xyz. The set S5 of black points contains xyz in its convex hull, as described in the proof of Theorem 4; that is, every triangle of area ε or more contains at least two black points; see Figure 6a. The set S(6) is obtained from S(5) by adding a one-element color class S6 = {u}, where u is in the interior of the triangle xyz; see Figure 6b. The set S(7) is obtained from S(6) by adding a one-element color class S7 = {v}, where v is in the interior of the triangle xyz, and the vertices u, v, and w are positioned such that →xu crosses zw, →zw crosses yv, and →yv crosses xu; see Figure 6c.
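To make the three crossing conditions in the construction of S(7) concrete, the following self-contained Python sketch checks them with standard orientation tests. It is our own illustration: the coordinates form a hypothetical configuration satisfying the conditions, not the one depicted in Figure 6c, and the function names are ours.

    def orient(a, b, c):
        # Sign of the cross product (b - a) x (c - a): +1 left turn, -1 right turn, 0 collinear.
        val = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (val > 0) - (val < 0)

    def segments_cross(p1, p2, q1, q2):
        # True if the open segments p1p2 and q1q2 cross in exactly one interior point.
        return (orient(p1, p2, q1) * orient(p1, p2, q2) < 0
                and orient(q1, q2, p1) * orient(q1, q2, p2) < 0)

    def ray_crosses_segment(a, b, c, d, far=1e6):
        # Whether the ray from a through b crosses segment cd; the ray is replaced
        # by a long segment, which is adequate for bounded inputs like these.
        end = (a[0] + far * (b[0] - a[0]), a[1] + far * (b[1] - a[1]))
        return segments_cross(a, end, c, d)

    x, y, z = (0.0, 0.0), (10.0, 0.0), (5.0, 9.0)   # triangle vertices
    u, v, w = (4.0, 5.0), (3.0, 1.0), (5.0, 3.0)    # interior points
    assert ray_crosses_segment(x, u, z, w)   # ray xu crosses zw
    assert ray_crosses_segment(z, w, y, v)   # ray zw crosses yv
    assert ray_crosses_segment(y, v, x, u)   # ray yv crosses xu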
It is easy to see that a noncrossing covering tree for {x, y, z, w} in S(5), minimizing 2s + t, requires either two segments and a fork, or at least three segments (and no fork). Hence, by Theorem 4, the size of a minimum perfect rainbow polygon for S(5) is at least 5. Figure 6a illustrates a perfect rainbow pentagon based on a covering tree that uses a segment to cover x and z and another segment to cover y and w. Every noncrossing covering tree for {x, y, z, w, u} in S(6) requires at least three segments, so the size of any perfect rainbow polygon for S(6) is at least 6 by Theorem 4. Figure 6b shows a perfect rainbow hexagon based on three segments that cover x and u, y and w, and z, respectively.

Figure 6: 5-, 6-, and 7-colored point sets whose rainbow indices are 5, 6, and 8, respectively.

Finally, consider a noncrossing covering tree for {x, y, z, w, u, v} in S(7). It has at least three segments, by the pigeonhole principle, since no three points are collinear. If it has four or more segments, then the size of the corresponding perfect rainbow polygon for S(7) is at least 8. Otherwise it consists of exactly three segments, and then an analysis of the possible choices shows that at least two forks are always required. Therefore, the size of a minimum perfect rainbow polygon for S(7) is at least 8. Figure 6c illustrates the perfect rainbow octagon for S(7) based on the segments that cover {x, u}, {y, v}, and {z, w}, respectively.

We next show that rb-index(5) ≤ 5, rb-index(6) ≤ 6, and rb-index(7) ≤ 8. Let S be a k-colored point set in the plane, with k ∈ {5, 6, 7}. By Lemma 6, there is a strip defined by two horizontal lines ℓ1 and ℓ2, with ℓ1 passing through a red point x and a blue point y, and ℓ2 passing through a green point z. Assume first that the strip ST contains points of all k − 3 other colors. Consider the horizontal ray emanating from z to the left and rotate it in clockwise direction, sweeping all the colored points in the strip until we find two consecutive points of S, say u and v, with different colors, say yellow and pink; see Figure 7a. Let p and q be the intersection points of →zu and →zv with ℓ1, respectively. Assume that p is to the left of x. If q is also to the left of x or on the line segment xy, then ypzq is a perfect rainbow quadrilateral for five of the colors; see Figure 7a. If q is to the right of y, then pqz is a perfect rainbow triangle for five of the colors; see Figure 7b. If k = 5, we are done. If k = 6, then we connect an orange point to z and thicken this edge (dotted line segments in Figures 7a and 7b). In this way, we obtain a perfect rainbow hexagon in the first case and a perfect rainbow pentagon in the second case. If k = 7, we repeat this process connecting a black point to z, to obtain either a perfect rainbow octagon or a perfect rainbow heptagon.

Suppose now that p is to the right of x. Arguing in an analogous way when rotating the horizontal ray emanating from z to the right counterclockwise, if u and v are two consecutive points with different colors, then we may assume that the intersection point p1 between ℓ1 and →zu is to the left of y. When this happens, that is, p is to the right of x and p1 is to the left of y, it is straightforward to see that the triangle xyz must contain at least one point of each color, and that q is on xy. If k = 5, then yxp′zq is a perfect rainbow pentagon; see Figure 7c. If k = 6, a perfect rainbow hexagon exists by Lemma 9; see Figure 5d.
If k = 7, we can build a perfect rainbow hexagon for six of the colors by Lemma 9, and form a perfect rainbow octagon for S by connecting a black point to z and thickening this edge.

Suppose now that the strip is empty and ℓ2 passes through a yellow point w. In the first case, yxpwp is a perfect rainbow pentagon; see Figure 7d. In the second case, yxzwp is a perfect rainbow pentagon; see Figure 7e. When k = 6 or k = 7, the strip contains points of at least two of the colors and we can argue as before, but now rotating clockwise about w instead of about z, to look for the first two consecutive points u and v with different colors, say pink and orange. If the intersection point p between →wu and ℓ1 is to the left of x, then we can build a perfect rainbow quadrilateral or a perfect rainbow triangle for five of the colors, as shown in Figures 7a and 7b. After that, we connect z (and a black point if k = 7) to w to form a perfect rainbow hexagon (or a perfect rainbow octagon if k = 7) for S; see Figure 7f. Finally, if p is to the right of x, then there are points of at least two of the colors to the left of →xw. Therefore, when rotating the horizontal ray emanating from x to the right clockwise until finding two consecutive points u and v of different colors, the intersection point between →xu and ℓ2 will necessarily be to the right of w, and we can carry out symmetric constructions.

The following corollary is straightforward from the proofs of Propositions 7 and 10.

Corollary 11. For k = 3, 4, 5, 6, 7, a perfect rainbow polygon with at most rb-index(k) vertices can be found in O(n log n) time for any k-colored set S of n points.

Proof. By Lemma 6, the strip ST used in the proofs of Propositions 7 and 10 can be found in O(n log n) time. In addition, the cyclic order of the points in S around any of x, y, z or w can be computed in O(n log n) time. A perfect rainbow hexagon as described in Lemma 9 can be obtained in O(n) time. Therefore, the corollary follows.

4 Upper bound for rainbow indexes

We show in this section that for every k-colored point set, there exists a perfect rainbow polygon of size at most 10⌊k/7⌋ + 11. We begin with an auxiliary lemma showing that any seven (monochromatic) points in a vertical strip can be covered by a noncrossing forest of two trees of order four and two, respectively, such that both trees are fully contained in the strip.

(ii) For i ∈ {1, 2}, the tree Ti has a leaf vi such that the ray emanating from vi in the direction opposite to the edge incident to vi does not cross Ti. Moreover, if the extension at vi hits Tj, j ≠ i, then the extension at vj does not hit Ti; that is, the two trees and the two extensions do not create cycles.

Proof. Let ℓ be the line passing through p1 and p7. Without loss of generality, we may assume that S contains at least ⌈5/2⌉ = 3 points below ℓ. Note that p1 and p7 are extremal points in S, hence they are vertices of the convex hull conv(S) of S. Points p1 and p7 decompose the boundary of conv(S) into two convex arcs, an upper arc and a lower arc. Since S contains at least 3 points below ℓ, the lower arc must have at least 3 vertices (including p1 and p7). We distinguish between two cases depending on the number of vertices of the lower arc of conv(S).

The lower arc of conv(S) has 3 vertices. Assume that the lower arc of conv(S) is the path (p1, pi, p7), where 1 < i < 7; see Figures 8a-8b. Since S contains at least 3 points below ℓ, at least 2 points of S are in the interior of the triangle p1pip7.
Rotate the ray →p1pi counterclockwise until it encounters a point pa in the interior of the triangle p1pip7; rotate the ray →p7pi clockwise until it encounters a point pb in the interior of the triangle p1pip7. We distinguish between two cases depending on whether pa and pb are distinct.

In the first case, assume that pa ≠ pb; see Figure 8a. The rays →p1pa and →p7pb intersect in the interior of the triangle p1pip7, at some point q. By construction, the remaining two points in S \ {p1, pi, p7, pa, pb} are above the path (p1, q, p7), in a wedge bounded by →qp1 and →qp7. This wedge is convex, hence it contains the line segment between the two points. Let T1 be the star centered at q with edges p1q, piq, and p7q; and let T2 be the line segment spanned by the two points of S above (p1, q, p7).

In the second case, assume that pa = pb; see Figure 8b. Let r be the first point along the ray →pipa such that the line segment p1r or p7r contains a point of S in the interior of the triangle p1pap7. Denote this point by pc ∈ S. By construction, the remaining two points in S \ {p1, pi, p7, pa, pc} are above the path (p1, r, p7), in a convex wedge bounded by →rp1 and →rp7. Let T1 be the star centered at r with edges p1r, pir, and p7r, and let T2 be the line segment spanned by the two points of S above (p1, r, p7). In both cases, T1 ∪ T2 covers all seven points in S, is in B, and is noncrossing, as required. Moreover, the second property of the lemma is clearly satisfied by choosing v1 = p1 and v2 the leftmost point of T2.

The lower arc of conv(S) has 4 or more vertices. Let p1, pi, pj, pk be the first four vertices of the lower arc of conv(S) in counterclockwise order (possibly pk = p7). Let q be the intersection point of the lines passing through p1 and pi, and through pj and pk, respectively. Rotate the ray →pkpj clockwise until it encounters a point in S \ {p1, pi, pj, pk}, and denote it by pa. Let T1 be the path (p1, q, pk, pa), and connect the two remaining points of S to define T2; see Figure 8c. Note that T1 ∪ T2 covers all seven points in S, is in B, and is noncrossing, as required. Moreover, the second property of the lemma is clearly satisfied by choosing v1 = p1 and v2 the leftmost point of T2.

Using Lemma 12, the following theorem provides a method to find noncrossing covering trees with few segments and forks.

Theorem 13. Let S be a finite set of n = 7j + r points in the plane in general position, with j ≥ 0 and 0 ≤ r ≤ 6. Then, in O(n log n) time, we can construct a noncrossing covering tree T consisting of 4⌊n/7⌋ + 1 + ⌈r/2⌉ segments and 2⌊n/7⌋ + ⌈r/2⌉ forks with multiplicity 1.

Proof. By rotating the point set if necessary, we may assume that the points in S have distinct x-coordinates. We assume first that n = 7j, for some integer j > 0. Figure 9 illustrates the method to obtain a noncrossing covering tree with 4⌊n/7⌋ + 1 segments and 2⌊n/7⌋ forks with multiplicity 1. We partition the n points from left to right into j groups G1, G2, . . . , Gj of seven points each; see Figure 9a. We apply Lemma 12 to every group Gi to cover the points in Gi by two trees, consisting of 4 segments in total; see Figure 9b. In this way, we obtain a forest F formed by 2j trees with 4j segments. In addition, by the same lemma, every tree Ti of F contains a special leaf vi that can be extended to the left without crossing Ti.
We add a long vertical segment P to the left of the point set such that the extension of any tree Ti of F at vi crosses P; see Figure 9c. For every tree Ti, we extend the edge incident to its special leaf vi to the left until the extension hits another tree, another extension, or P; see Figure 9c. This is carried out exploring, for example, the special leaves from right to left. Thus, we join the 2j trees of F and the segment P to form a single component. This component is necessarily a noncrossing covering tree T with 4j + 1 segments and 2j forks with multiplicity 1, since all extensions go to the left without creating cycles. Therefore, there exists a noncrossing covering tree T consisting of 4⌊n/7⌋ + 1 segments and 2⌊n/7⌋ forks with multiplicity 1.

Consider the case that n = 7j + r, where 1 ≤ r ≤ 6. Using the first 7j points from left to right, we proceed as before, and we build a noncrossing covering tree T with 4j + 1 segments and 2j forks with multiplicity 1. If j = 0, the previous step is not required. The last r points can be covered by connecting the point at position 7j + 1 to the following one, the point at position 7j + 3 to the following one, and so on. If the last point cannot be paired with a following one, we assign a small horizontal segment to it. In this way, we are covering the last r points with ⌈r/2⌉ segments. These segments can be joined to T by extending their leftmost points. Therefore, we can obtain a noncrossing covering tree T consisting of 4⌊n/7⌋ + 1 + ⌈r/2⌉ segments and 2⌊n/7⌋ + ⌈r/2⌉ forks with multiplicity 1.

It remains to show that the construction above can be implemented in O(n log n) time. We can sort the points in O(n log n) time. Notice that by construction, the minimum number of pairwise noncrossing segments into which T can be decomposed is precisely 4⌊n/7⌋ + 1 + ⌈r/2⌉.

As a consequence of this theorem, we can give an upper bound for the size of a perfect rainbow polygon.

Theorem 14. Let S be a k-colored set of n points in general position. Then a perfect rainbow polygon P of size at most 10⌊k/7⌋ + 11 can be computed in O(n log n) time.

Proof. We choose a point of each color to define a point set S′ of cardinality k = 7j + r, with j ≥ 0 and 0 ≤ r ≤ 6. By Theorem 13, there is a noncrossing covering tree T for the point set S′, consisting of 4⌊k/7⌋ + 1 + ⌈r/2⌉ segments and 2⌊k/7⌋ + ⌈r/2⌉ forks with multiplicity 1, and it can be computed in O(k log k) time. By Lemma 1, given a noncrossing covering tree T and a partition M of the edges into the minimum number s of pairwise noncrossing segments, for every ε > 0, there exists a simple polygon P with 2s + t vertices such that area(P) ≤ ε and T lies in P, where t is the sum of the multiplicities of all forks in T. Thus, for every ε > 0, we can construct a simple polygon P with 2(4⌊k/7⌋ + 1 + ⌈r/2⌉) + 2⌊k/7⌋ + ⌈r/2⌉ = 10⌊k/7⌋ + 2 + 3⌈r/2⌉ ≤ 10⌊k/7⌋ + 11 vertices such that area(P) ≤ ε and S′ lies in P. By choosing ε sufficiently small so that P contains no other point in S except for the points in S′, we can construct a perfect rainbow polygon for S of size at most 10⌊k/7⌋ + 11. A suitable ε > 0 can be half of the minimum distance between the covering tree T and the points in S \ S′. To find this distance, we can compute the Voronoi diagram for a set of sites, which consists of the O(k) ≤ O(n) edges of T and the O(n) points in S \ S′, in O(n log n) time [14, Sec. 7.3]. The Voronoi diagram is formed by O(n) line segments and parabolic arcs, and we can find the closest point in T (hence in S \ S′) for each of these arcs in O(1) time.
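As a quick numerical sanity check on the counting in Theorems 13 and 14, the following sketch (ours; the function names are not from the paper) computes the segment and fork counts of the construction and verifies that the resulting polygon size 2s + t stays within the stated bound:

    import math

    def construction_counts(n):
        # Theorem 13: covering tree for n = 7j + r points.
        j, r = divmod(n, 7)
        segments = 4 * j + 1 + math.ceil(r / 2)
        forks = 2 * j + math.ceil(r / 2)   # all forks have multiplicity 1
        return segments, forks

    def enclosing_polygon_size(n):
        # Lemma 1: the tree fits in an arbitrarily small polygon with 2s + t vertices.
        s, t = construction_counts(n)
        return 2 * s + t

    # Theorem 14 applies this to one representative point per color class.
    for k in (5, 7, 14, 50, 100):
        assert enclosing_polygon_size(k) <= 10 * (k // 7) + 11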
5 Lower bound for rainbow indexes

For every k ≥ 3, Dumitrescu et al. [16] constructed a set S of n = 2k points in the plane (without colors) such that every noncrossing covering path has at least (5n − 4)/9 edges. They also showed that every noncrossing covering tree for S has at least (9n − 4)/17 edges. Furthermore, every set of n ≥ 5 points in general position in the plane admits a noncrossing covering tree with at most n/2 noncrossing segments, and this bound is the best possible. We recall that a segment is defined as a chain of collinear edges. In this section, we use the point sets constructed in [16] to derive a lower bound for the complexity of a covering tree as defined in Section 2. This bound, in turn, yields a lower bound on the complexity of perfect rainbow polygons for colored point sets built from such sets.

Construction. We use the point set constructed by Dumitrescu et al. [16]. We review some of its properties here. For every k ∈ N, they construct a set of n = 2k points, S = {ai, bi : i = 1, . . . , k}. The pairs {ai, bi} (i = 1, . . . , k) are called twins. The points ai (i = 1, . . . , k) lie on the parabola α = {(x, y) : y = x²}, sorted by increasing x-coordinate. The points bi (i = 1, . . . , k) lie on a convex curve β above α, such that dist(ai, bi) < ε for a sufficiently small ε, and the lines aibi are almost vertical with monotonically decreasing positive slopes (hence the supporting lines of any two twins intersect below α). For i = 1, . . . , k, they also define pairwise disjoint disks Di(ε) of radius ε centered at ai such that bi ∈ Di(ε), and the supporting lines of segments aiaj and bibj meet in Di(ε) for every j, i < j ≤ k. Furthermore, (1) no three points in S are collinear; (2) no two lines determined by the points in S are parallel; and (3) no three lines determined by disjoint pairs of points in S are concurrent. Finally, the x-coordinates of ai (i = 1, . . . , k) are chosen such that (4) for any four points c1, c2, c3, c4 from S, labeled by increasing x-coordinate, the supporting lines of c1c4 and c2c3 cross to the left of these points. See Figure 10 for a sketch of the construction.

Analysis. Let S be a set of n = 2k points defined in [16] as described above, for some k > 1. Let M be a set of pairwise noncrossing line segments in the plane whose union is connected and contains S. In particular, if T is a noncrossing covering tree for S, then any partition of the edges of T into pairwise noncrossing segments could be taken to be M. A segment in M is called perfect if it contains two points in S; otherwise it is imperfect. By perturbing the endpoints of the segments in M, if necessary, we may assume that every point in S lies in the relative interior of a segment in M. By the construction of S, no three perfect segments are concurrent, so we can define the set Γ of maximal chains of perfect segments; we call these perfect chains. Dumitrescu et al. [16] proved several properties of a covering path for S. Clearly, a covering path has precisely two leaves, while a covering tree may have arbitrarily many leaves.

Proof. Suppose, for the sake of contradiction, that γ1, γ2 ∈ Γx have a common right endpoint q. Let pq and rq, respectively, be the rightmost segments of γ1 and γ2. If pq contains a twin, then pq has positive slope (by construction), and so q is the upper endpoint of pq. In this case segment rq is imperfect by Lemma 16, contradicting the assumption that rq is in γ2.
We may assume that neither pq nor rq contains a twin. In this case, their supporting lines intersect to the left of the points in S on pq and rq by property (4), contradicting our assumption that q is the right endpoint of both segments.

Corollary 18. Every chain in Γ consists of at most two chains in Γx.

Denote by s0, s1 and s2, respectively, the number of segments in M that contain 0, 1, and 2 points from S. An adaptation of a charging scheme from [16, Lemma 4] yields the following result, where t is the number of forks (with multiplicity) in M.

Proof. Let pq be a perfect segment of M, and part of a chain γ ∈ Γ. We charge pq to either an endpoint of γ or some imperfect segment. We define the charging as follows. If pq contains a twin, then charge pq to the top vertex of pq, which is the endpoint of a perfect chain by Lemma 16. Assume now that pq does not contain a twin, its left endpoint is p, and it contains a point from each of the twins {ai, bi} and {aj, bj}, with i < j. We consider the four cases presented in Lemma 15. In Case 1, charge pq to p, which is the endpoint of a perfect chain. In Case 2, charge pq to the imperfect segment s containing a point of the twin {ai, bi}. In Case 3, charge pq to the endpoint v of a perfect chain located in Di(ε). Now, consider Case 4 of Lemma 15. In this case, pq is the leftmost segment of a maximal x-monotone chain γx. We charge pq to the right endpoint of γx, which is the endpoint of a perfect chain. This completes the definition of the charges.

Note that every imperfect segment and every right endpoint of a chain in Γ is charged at most once for perfect segments in Cases 1-3, and every left endpoint of a chain is charged at most twice. By Corollary 18, each endpoint of a perfect chain is charged at most once for perfect segments in Case 4. Overall, every imperfect segment containing one point of S is charged at most once, and every endpoint of a perfect chain is charged at most twice. Consequently,

s2 ≤ s1 + 4|Γ|. (2)

We bound |Γ| from above in terms of s0, s1, and t. Choose an arbitrary root vertex in T, and direct all edges in T towards the root. Every perfect chain has a unique vertex v closest to the root. As all chains in Γ are maximal and as no three perfect segments are concurrent, v must be a fork, the endpoint of an imperfect segment, or the root. This yields |Γ| ≤ 2(s0 + s1) + t + 1. Combined with (2), this yields s2 ≤ s1 + 4[2(s0 + s1) + t + 1] = 8s0 + 9s1 + 4(t + 1), as claimed.

Theorem 21. For every integer k ≥ 5, there exists a finite set of k-colored points in the plane such that every perfect rainbow polygon has at least (40⌊(k − 1)/2⌋ − 8)/19 vertices.

Proof. Assume first that k is odd, and let S be the set of k − 1 = 2j ≥ 4 points from [16]. If T is a noncrossing covering tree for S minimizing 2s + t = m, then by Theorem 4, there exists a k-colored point set Ŝ built from S such that every perfect rainbow polygon for Ŝ has at least m vertices. By Lemma 20, every noncrossing covering tree of S satisfies 2s + t ≥ (20(k − 1) − 8)/19, hence every perfect rainbow polygon for Ŝ has at least (20(k − 1) − 8)/19 = (40⌊(k − 1)/2⌋ − 8)/19 vertices.

Assume now that k is even. From the (k − 1)-colored point set Ŝ built previously, we can obtain a k-colored point set Ŝ′ by adding a new point with a different color. Since every perfect rainbow polygon for Ŝ has at least (20(k − 2) − 8)/19 = (40⌊(k − 1)/2⌋ − 8)/19 vertices, every perfect rainbow polygon for Ŝ′ also has at least (40⌊(k − 1)/2⌋ − 8)/19 vertices.
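For concreteness, this small sketch of ours tabulates the lower bound of Theorem 21 against the upper bound of Theorem 14:

    import math

    def lower_bound(k):
        # Theorem 21 (k >= 5): some k-colored set forces at least this many vertices.
        return math.ceil((40 * ((k - 1) // 2) - 8) / 19)

    def upper_bound(k):
        # Theorem 14: a perfect rainbow polygon of this size always exists.
        return 10 * (k // 7) + 11

    for k in (5, 7, 20, 100, 1000):
        print(f"k={k}: {lower_bound(k)} <= rb-index(k) <= {upper_bound(k)}")

Asymptotically the two bounds grow like (20/19)k ≈ 1.05k and (10/7)k ≈ 1.43k, which is the gap the concluding section proposes to close.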
Conclusions

In this paper, we studied the perfect rainbow polygon problem and we proved that the rainbow index of k satisfies (40⌊(k − 1)/2⌋ − 8)/19 ≤ rb-index(k) ≤ 10⌊k/7⌋ + 11, for k ≥ 5. We also showed that k = 7 is the first value such that rb-index(k) ≠ k. Our bounds are based on the equivalence between perfect rainbow polygons and noncrossing covering trees. Several open questions arise in relation to this problem. For instance, we conjecture that given a colored point set S, finding a minimum perfect rainbow polygon for S is NP-complete. Another interesting question is to close the gap between the lower and upper bounds on the rainbow index.
A biophysical minimal model to investigate age-related changes in CA1 pyramidal cell electrical activity

Aging is a physiological process that is still poorly understood, especially with respect to effects on the brain. There are open questions about aging that are difficult to answer with an experimental approach. Underlying challenges include the difficulty of recording in vivo single cell and network activity simultaneously with submillisecond resolution, and brain compensatory mechanisms triggered by genetic, pharmacologic, or behavioral manipulations. Mathematical modeling can help address some of these questions by allowing us to fix parameters that cannot be controlled experimentally and investigate neural activity under different conditions. We present a biophysical minimal model of CA1 pyramidal cells (PCs) based on general expressions for transmembrane ion transport derived from thermodynamical principles. The model allows directly varying the contribution of ion channels by changing their number. By analyzing the dynamics of the model, we find parameter ranges that reproduce the variability in electrical activity seen in PCs. In addition, increasing the L-type Ca2+ channel expression in the model reproduces age-related changes in electrical activity that are qualitatively and quantitatively similar to those observed in PCs from aged animals. We also make predictions about age-related changes in PC bursting activity that, to our knowledge, have not been reported previously. We conclude that the model's biophysical nature, flexibility, and computational simplicity make it a potentially powerful complement to experimental studies of aging.

Reviewers' Comments to the Author

Reviewer #1: The model presented in this study is impressive, particularly given its simplicity: with just three variables (though many constants), and with only one parameter to reproduce the age-related differences. It effectively reproduces the differences between young and old CA1 pyramidal cells in the hippocampus, such as adaptive firing, stimulus-induced bursting, and spontaneous bursting. I thoroughly enjoyed reading it and congratulate the authors on their impressive results, along with the well-done Jupyter notebook that complements the paper excellently. I encourage the authors to continue their work by exploring bifurcation analysis and conducting a more robust exploration of the model's parameter limitations compared to experimental values. I hope my comments can help to improve this paper.

We thank the reviewer for all their positive feedback and suggestions for improving this paper. We also thank the reviewer for their encouragement regarding future work.

Major comments:

1) Abstract: The problem is well-introduced, and the rationale for the study is clear. However, please add some statements at the end about the conclusions of your research and its relevance to the field (what's the novelty?).

We added two closing sentences to the abstract, which talk about our novel results and summarize our conclusions regarding the potential power of this model in complementing experimental studies of aging (marked in blue; no line numbers).

2) In the introduction you pose some questions [3][4][5][6][7][8] that are not fully answered by the paper.

2.1) I'd consider revisiting these questions in the discussion together with a mention of how this work could help in the field of AD or Parkinson's, or other age-related pathologies.
We've added text to the Discussion section on 'Cellular heterogeneity', which we think better links the proposed experiments we had there previously to some of the questions we pose in the Introduction regarding normal neurophysiological aging and its stages (lines 362-367 and lines 370-373). We also added some text therein and an additional reference regarding how this model might be used to help understand aspects of Alzheimer's disease (lines 366-367, ref. 93).

2.2) Can this be used to model the impairment of plastic mechanisms?

The model in its current formulation cannot be used to study plasticity, since it does not include equations to model synaptic input or neurotransmitter release. However, the model could be extended to do this, which we would like to explore in future work.

We added language to the Methods section to clarify that this present formulation is different from our previous formulations in that it includes calcium dynamics (i.e. is three-dimensional rather than two-dimensional), and is also specially tuned with parameter values taken from experimental data from hippocampal PCs (lines 55-59).

4) Methods Section 2.1: Please provide a brief explanation of what I_{F} and I_{CaL} are when explaining Equation 1.

We've added these brief explanations (lines 72-75).

5) Clarification on s_x and N_x: The values of s_x are unclear for non-voltage-gated channels (NaT and DK?), together with the values of N_x. Can you consider adding this information to Table 1? Is there supporting literature for the values you chose?

Regarding the values of s_x, we write, "s_x is ∼1 pA for most voltage-gated channels [38], and is ∼5-10 pA for SK channels" (lines 90-91), and we cite two references from the literature that support these values (refs. 38, 39). The number of channels in the membrane, N_x, for each of these channels is not well known in CA1 PCs, to the best of our knowledge. We do not enter values for s_x or N_x directly as parameters in the model, which is why they are not included in the parameter table (Table 2). If these were well known in other cells, they could be entered, but given that we do not have good numbers for the second, what we instead enter into the model as a parameter is the amplitude, a_x, which is the product of s_x and N_x. These amplitudes are chosen to produce currents of the same amplitude as seen in experimental recordings, and the references from the literature supporting those values are included in Table 2.
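As a minimal illustration of this amplitude parameterization (the numbers below are hypothetical placeholders, not the fitted values reported in Table 2):

    def amplitude(s_x_pA, n_x):
        # Whole-cell current amplitude a_x = s_x * N_x, in pA; this lumped
        # quantity is what enters the model as a parameter.
        return s_x_pA * n_x

    # Hypothetical example: a ~1 pA unitary current through 2000 channels
    # gives a_x = 2000 pA = 2 nA.
    a_x = amplitude(1.0, 2000)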
6) Consider adding the figures of the Jupyter notebook in an Appendix/Supporting Material, so that it is easier to reference. They will help in understanding the model functions and variables, contributions of I_x to V...

We've added all the additional figures generated in the Jupyter notebook to the manuscript as Supporting Information (Figs S1-S11).

7) Recent paper of potential interest to add to the paper (bursting patterns aged vs. young, lines 168-169): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10926450/ also consider mentioning it in the intro/discussion.

We had missed this paper, thank you for flagging it. We've made a small change to the Study Design section (what was lines 168-169 and is now lines 211-213), adding the word 'their' to specify that here we mean we are not aware of any studies that look at the bursting patterns of CA1 PCs in young and aged animals. From reading the above paper, we see that they did look at bursting in VTA cells but not in hippocampal cells, where they were more focused on synaptic plasticity. However, the comparison of young and aged VTA cells is interesting and relevant, and so we added a brief description of this study's results to the Discussion section (lines 446-452).

8) General comments to the figures:

8.1 There are no captions (?);

Our apologies for this omission. We corrected this and there are now captions for all figures.

8.2 Please add the legend (yPC, aPC) + see next comment.

We've added legends to all figures.

9) Figure 1 (and methods 3.1).

9.1 (top panel) Not clear how you compute the frequency (e.g., 60 Hz vs. 40 Hz). I think it'd be helpful to have in the figure a visual separation between the two segments you are referring to (first 100 ms + the rest), plus a bar plot (for example) that shows quantitatively the differences between young and aged PCs.

The frequency is calculated manually by simply counting the number of spikes in the given time window, e.g. in the initial 100 ms we see 6 vs. 4 spikes, which comes out to 60 Hz and 40 Hz, respectively. To aid in comparing the two segments (first 100 ms vs. last 700 ms) we added a supplemental figure in the Jupyter notebook and in Supporting Information (S4 Fig), and added a reference to this figure in this section of the main text (lines 242-244). We think a bar plot might be misleading, since it could imply we have more than one data point (more than one 'subject') for the young and aged PCs. Since this is a deterministic model, running these simulations with the same parameters always produces the same result.

9.2 Also, if you don't want to include the figures in the notebook, at least add in the plot when the square pulse starts and ends and with which amplitude.

We've added all the additional figures generated in the Jupyter notebook, including those that show the stimulus traces, to the Supporting Information. We've also added the stimulus traces to the main figures within the manuscript, with the exception of two figures where the stimulus amplitude varies (Fig. 3) or there is no stimulus (Fig. 4).

9.3 Again, I'd add the rest of the figures (which are relevant in my opinion) in the appendix (at least).

We've added all the additional figures in the Jupyter notebook to the Supporting Information. We've also added references to these supplemental figures in the text, where relevant.

9.4 Rephrase the text 197-201 so that it is easier to understand which panel you are referring to.

We appreciate the reviewer pointing this out. We realized this was a bit confusing for all figures (not just this section). So, we've redone all figures with A, B, C, etc. labels for each panel and now refer to the panels this way throughout the text (each instance marked in blue).

9.5 All of the above (9.1-9.3) also applies to the other figures (when applicable)!!
We've applied these recommendations to all figures, as applicable.

10) Parameter tuning: In the methods (and/or results), explain how parameters are tuned to find each firing pattern (I understand it is not only based on literature). Clarify which parameters are changed for each plot.

We added language to the Methods section to explain how the parameters were tuned (lines 177-188). Regarding clarifying which parameters are changed, we thank the reviewer for this prompt. Changes to the parameters are now indicated in all the figure captions.

11) Figure 2, Bottom Panel: Indicate whether the graph shows saturation or continuous increase. If not changing the plot, mention in the caption that this is not a saturation effect.

This is not a saturation effect or a continuous increase, but rather shows the AHP response after a short stimulus, and then that response gradually running down over time after the stimulus ends. We think this is clearer now that we have added the stimulus pulse to the plot, and we thank the reviewer for that suggestion for all figures.

12) Figures 5 and 6: Consider merging these figures as they contain overlapping information. Clearly indicate which parameters are changed between plots and the reasons for those changes.

We think both of these figures are important, since they show results for different firing regimes and reproduce different electrical phenomena seen in PC recordings. So, we have kept these figures. The parameters used for each plot are indicated by referring to parameters in Figs. 1 and 5. The only LFP parameter changed between the two plots is indicated in the Fig. 6 caption.

Minor comments:

31 means → a means

Corrected. In the current template, this is now on line 35.

There is a problem with the references to the figures, and it was very hard to guess which figure you might be referring to.

Our apologies, we had LaTeX compilation errors that meant the figure numbers were not displayed. We have fixed these and all figure references should now be correct.

Remove paragraph space between 290-291.

This is a new paragraph (now on line 357).
Spectrum Handoff based on Imperfect Channel State Prediction Probabilities with Collision Reduction in Cognitive Radio Ad Hoc Networks

The spectrum handoff is highly critical as well as challenging in a cognitive radio ad hoc network (CRAHN) due to the lack of coordination among secondary users (SUs), which leads to collisions among the SUs and consequently affects the performance of the network in terms of spectrum utilization and throughput. The target channel selection mechanism, as part of the handoff process, can play an enormously significant role in minimizing the collisions among the SUs and improving the performance of a cognitive radio network (CRN). In this paper, an enhanced target channel selection scheme based on imperfect channel state prediction is proposed for the spectrum handoff among the SUs in a CRAHN. The proposed scheme includes an improved frame structure that increases coordination among the SUs in the ad hoc environment and helps in organizing the SUs according to the shortest job first principle during channel access. Unlike the existing prediction-based spectrum handoff techniques, the proposed scheme takes into account the accuracy of channel state prediction; the SUs affected due to false prediction are compensated by allowing them to contend for channel access within the same transmission cycle, thus enabling them to achieve higher throughput. The proposed scheme has been compared with contemporary spectrum handoff schemes and the results have demonstrated substantial improvement in throughput and extended data delivery time by virtue of the reduced number of collisions.

Introduction

Over the last few decades, the demand for wireless spectrum access has grown exponentially. This rise in demand is due to an enormous increase in the number of end users from healthcare, business, finance, defense, the internet of things (IoT), etc., all having seamless wireless connectivity requirements and running different interactive applications [1]. However, the fixed spectrum allocation policies, which aimed to prevent interference with other users, resulted in spectrum scarcity [2]. Spectrum allocated to primary users (PUs) is mostly underutilized in time and space. According to the Federal Communications Commission (FCC), there is a huge variation in the temporal and spatial spectrum utilization, ranging from 15% to 85% [3,4]. This forced the authorities to look for better spectrum management policies. Cognitive radio (CR) is a key technology that realizes the concept of dynamic spectrum access (DSA), in which secondary users (SUs) can access the underutilized portion of the spectrum of a primary network through sensing and channel access mechanisms [5,6]. Quick and accurate spectrum sensing is extremely important in the sensing phase of a cognitive radio network (CRN) to maximize the pool of resources available to SUs [7,8]. In a CRN, the SUs opportunistically access channels that are sensed to be currently unutilized by the primary users (PUs); however, PUs can preempt the SUs' transmissions at any time and, consequently, the SUs have to vacate the channels to avoid interference to the PUs and switch to new idle channels to resume their transmissions. This process of switching to a new idle channel by the SU is called spectrum handoff, which is the primary focus of this work. Spectrum handoff is classified into two main categories: (i) non-channel switching, also known as non-handoff (NHO); and (ii) channel switching spectrum handoff [9].
In NHO, the interrupted SU stays on the same channel and waits for the PU to vacate that channel to resume its unfinished transmission. In this case, the current channel and the target channel are the same. However, in channel switching spectrum handoff the interrupted SU has to vacate the channel and switch to a new idle channel. The process of finding the new idle channel can be either reactive or proactive [10]. In reactive channel switching spectrum handoff, the search for the new idle channel is done in real time on the actual arrival of the PU. It provides an accurate list of idle channels as the spectrum sensing is performed in the most relevant environment. However, it increases the handoff delay, which in turn increases the extended data delivery time (EDDT). In proactive channel switching spectrum handoff, the SU predicts the channel state based on the long-term PU traffic statistics and the target channel is selected well in advance. Therefore, when the SU is interrupted, it switches to one of the already selected target channels. This saves the spectrum sensing time that is incurred in the case of reactive channel switching spectrum handoff. Here, the trade-off lies in prediction accuracy. Prediction accuracy is a function of PU traffic intensity and greatly affects the performance of the network, especially during spectrum handoff.

Existing work on spectrum handoff where channel state prediction is used for proactive target channel selection ignores the accuracy of prediction, considering only a perfect prediction mechanism [11][12][13][14][15]. An imperfect channel prediction mechanism has been considered for improving only the spectrum sensing phase in a cognitive cycle [16][17][18][19]. In [20,21], prediction accuracy for improving the spectrum sensing delay as a part of spectrum handoff has been considered; however, this is applicable only in a centralized CRN. The work in this paper differs from the existing work in that the imperfect channel prediction method is considered for analyzing the spectrum handoff in a multi-user cognitive radio ad hoc network (CRAHN). To the best of our knowledge, this is the first work that takes into consideration the channel state prediction accuracy and the network coordination among distributed SUs, leading to the improvement of collision probability, EDDT and throughput of the system. The coordination among distributed SUs during channel selection and access is achieved by proposing a novel frame structure which provides contention-free channel access when the channel state prediction is true; otherwise, it offers a contention-based channel access within the same transmission cycle. Furthermore, the impact of PU traffic intensity on the performance of a CRAHN has also been considered in this work.

Contributions

The following are the major contributions of this research:

• A spectrum handoff scheme based on imperfect channel state prediction is proposed, which aims to reduce the EDDT and improve the average throughput of the system by virtue of the reduced number of collisions among SUs.

• An improved frame structure is proposed that aims at providing coordination among distributed SUs and organizing them according to the shortest job first (SJF) principle during channel access.
• The performance of the proposed spectrum handoff scheme was evaluated through modeling and simulation; a comparison with the existing schemes was also carried out, demonstrating improvement in EDDT, the number of collisions among SUs, and the average throughput of the SUs in a CRAHN.

The rest of the paper is organized as follows. The related work is presented in Section 2 with a comparative analysis of existing spectrum handoff techniques. The proposed spectrum handoff scheme, the system model and assumptions are described in detail in Section 3. The results are presented in Section 4, followed by the conclusion and future directions in Section 5.

Related Work

Existing work shows that many efforts have been made in recent years to reduce the number of collisions, lower the extended data delivery time and improve the overall throughput of the system during the spectrum handoff management process for both centralized and decentralized CRNs. In a centralized CRN, a central entity coordinates the channel selection and access process and helps in reducing unwanted collisions among SUs. However, in a decentralized CRN, also known as a CRAHN, due to the distributed nature of the network, the SUs have to bear the burden of a fair channel selection and access mechanism. Network coordination among distributed SUs is a very challenging task and is achieved using split phase, common hopping sequence and dedicated common control channel (CCC) approaches [22,23]. Split phase and common hopping sequence require tight network synchronization for all the network nodes. The dedicated channel is favorable and outperforms the other two when there are many primary channels; otherwise, it limits the spectral efficiency when the number of channels is small [24]. In [25,26], the authors considered a common hopping mechanism to find channel rendezvous during spectrum handoff management. In [27,28], a dedicated CCC is considered for coordination among distributed SUs. The authors of [29] concluded that, with a dedicated CCC, network coordination can be performed simultaneously with data transmission, which enables SUs to achieve higher throughput; this makes the use of a dedicated CCC for network coordination in a CRAHN quite favorable.

The target channel selection process, as a part of spectrum handoff, can be performed reactively or proactively. Reactive target channel selection gives the accurate state of the channel due to real-time sensing [30][31][32], whereas proactive channel selection depends on the accuracy of channel state prediction, which plays an important role in the overall performance of spectrum handoff [11][12][13][14][15]. Due to the fluctuating nature of the radio environment and PU activities, the impact of false prediction cannot be ignored. In [30], an analytical model is presented to characterize the effect of multiple interruptions caused by the PUs on the extended data delivery time by taking into consideration reactive decision spectrum handoff. Coordination among SUs is not considered in this work. The authors of [31] presented an analytical framework to evaluate the effect of reactive spectrum handoff on real-time traffic in a CRN. The reactive sensing during target channel selection reduces the blocking and forced termination probabilities and improves the channel utilization.
The authors of [32] proposed a reactive handoff scheme using a dedicated CCC, where SUs can hold multiple channels simultaneously, even in the presence of a PU, using a hybrid sharing scheme, which increases the net throughput of the system and keeps the interference temperature within acceptable limits. The authors of [11,12] proposed a proactive spectrum handoff in which target channel selection is achieved proactively, which saves sensing and handshake time during spectrum handoff. The impact of false prediction during the target channel selection process, which causes collisions between the PU and SUs, is ignored in that work. Similarly, the authors of [13] proposed a channel-state-prediction-based spectrum handoff technique for CRAHNs, where SUs organize themselves through a pseudo-random sequence during channel access. The knowledge of such a sequence must be available to each SU prior to channel access in each time slot, which can be challenging in a distributed network. The proposed scheme focuses on minimizing frequent spectrum handoffs by selecting the channel with the maximum residual time. This scheme performs better than existing reactive spectrum handoff schemes in terms of average throughput and service time. The accuracy of channel state prediction is not considered in this work either. In [25], the authors presented a prediction-based proactive spectrum handoff for a distributed secondary network. A distributed channel selection mechanism is presented to avoid collisions among SUs. Common hopping is used to find channel rendezvous. SUs affected by prediction errors attempt channel access in the next time cycle. EDDT and the number of collisions are reduced compared to reactive spectrum handoff. The authors of [14] proposed an adaptive spectrum handoff strategy that combines the benefits of both reactive and proactive channel switching. The authors used a primary-prioritized Markov approach to analyze the interaction between PUs and SUs. The accuracy of channel state prediction for proactive channel switching is not considered in this work either. In [15], a spectrum handoff technique is proposed by combining the advantages of the reactive and proactive target channel selection processes. This scheme assumes a perfect channel state prediction mechanism for finding the list of idle channels. The prediction accuracy, which directly impacts the performance of the network, is not considered. In short, the authors of [11][12][13][14][15] did not consider the impact of false channel state prediction on performance. Most of the studies based on channel state prediction have limitations due to the possibility of false prediction, as studied in [33]. It is very challenging to obtain an accurate spectrum prediction result due to the time-varying nature of the radio environment. In addition, perfect channel state information of the primary network is difficult to obtain, as the two networks operate independently [34]. In [16], the authors proposed a frame structure to exploit a cooperative spectrum prediction and sensing mechanism for a centralized CRN, where the secondary base station predicts the channel state. Cooperative sensing is used to improve the prediction accuracy by reducing the sensing errors. In this work, imperfect channel state prediction using an artificial neural network (ANN) is considered and used for a hybrid spectrum sharing mechanism. However, the spectrum handoff process is not elaborated.
In [17], the authors proposed a channel state predictor using an ANN and a hidden Markov model (HMM), investigated the accuracy of both models and evaluated their performance. The advantage of channel state prediction in this work is applied only to the spectrum sensing process by improving the sensing time. Yang et al. [18] proposed a frame structure that incorporates a prediction phase to select channels for real-time sensing instead of sensing all the channels, thus reducing the spectrum sensing time. Similarly, [19] considered an HMM for channel state prediction and its impact on the improvement of sensing delay by skipping the channels predicted busy in the sensing phase, thus sensing only the channels predicted idle. The authors of [16][17][18][19] considered the impact of prediction accuracy; however, the benefit of prediction is limited to improving the spectrum sensing time, ignoring the spectrum handoff management process. The authors of [20] proposed a proactive channel-switching handoff mechanism to minimize the number of handoffs. A list of candidate target channels based on the probability of idleness is maintained and sensed during handoff instead of sensing all channels. This reduces the sensing delay incurred as part of the handoff process. However, this work also assumes a centralized network architecture. In [21], a probability-based proactive spectrum handoff mechanism is proposed, where a centralized device computes the probability of idleness for each primary channel and then, based on the QoS requirements of the SUs, allocates an appropriate predicted channel during handoff. Handoff delay and transmission delay are improved by sensing the right channel for handoff based on the prediction probabilities. However, this scheme is also targeted at a centralized CRN. In a centralized CRN, a central entity provides coordination among SUs during random channel access, which helps to avoid collisions among the SUs and maximizes the utilization of the discovered idle channels. In an ad hoc CRN with no central entity, coordination among SUs during channel access is a challenging task that must be handled with caution to prevent collisions among the SUs. Avoiding collisions becomes even more critical during spectrum handoff than in general channel allocation scenarios, as these collisions result in the loss of created opportunities and increase the EDDT for SU packets. The probability of collision during random channel access for N_su SUs contending for M sensed idle channels is given by Equation (1), as in [35,36]. It is observed that the probability of collision increases as the number of contenders increases for a fixed number of idle channels. This in turn decreases the average number of successful SUs. The number of idle channels available to SUs in a CRN depends on the primary user traffic intensity. At high primary load, there are very few idle channels; hence, the probability of collision increases. The work in this paper is focused on developing a spectrum handoff scheme that minimizes the number of collisions among the SUs during contention for channel access. If some of the SUs are provided with contention-free channel access on predicted idle channels, fewer contenders are left for contention-based channel access, leading to better spectrum handoff performance. Equation (1) can then be rewritten as Equation (2), where M_n = M − N_cf is the number of remaining available idle channels, N_cf is the number of SUs granted contention-free channel access and P_cx_new is the new probability of collision with fewer contenders than in Equation (1).
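To make the contention argument concrete, the following is a minimal sketch of the collision probability under uniform random channel selection. The closed form used here (the chance that at least one of the other contenders picks the same channel as a tagged SU) is an assumption standing in for Equations (1) and (2), whose bodies are not reproduced above; the function names and example numbers are hypothetical.

```python
# Collision probability when n_su SUs each pick one of m idle channels
# uniformly at random (assumed form of Equation (1), not quoted from the paper).

def p_collision(n_su: int, m: int) -> float:
    """Probability that a tagged SU's chosen channel is also chosen by
    at least one of the other (n_su - 1) contenders."""
    if n_su <= 1:
        return 0.0  # nobody to collide with
    if m <= 0:
        return 1.0  # no idle channels to contend for
    return 1.0 - (1.0 - 1.0 / m) ** (n_su - 1)

def p_collision_with_cf(n_su: int, m: int, n_cf: int) -> float:
    """Assumed analogue of Equation (2): n_cf SUs get contention-free access
    on correctly predicted channels, leaving (n_su - n_cf) contenders for the
    M_n = m - n_cf remaining idle channels."""
    return p_collision(n_su - n_cf, m - n_cf)

if __name__ == "__main__":
    n_su, m = 20, 40  # hypothetical contender and idle-channel counts
    for n_cf in (0, 5, 10, 15):
        print(f"N_cf={n_cf:2d}  P_collision={p_collision_with_cf(n_su, m, n_cf):.3f}")
```

As expected, granting contention-free access to a fraction of the SUs lowers the residual collision probability, which is exactly the intuition behind the proposed scheme.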
Based on this philosophy, we have devised a spectrum handoff scheme with due consideration of imperfect channel state prediction. It provides contention-free channel access to the fraction of SUs for which the channel state prediction is true. SUs not granted contention-free access during handoff, due to false prediction, contend for the sensed idle channels in a random fashion within the same transmission cycle. This improves the overall spectrum utilization and translates into reduced data delivery time for the SUs in a CRAHN. A comparison of the proposed scheme with related work in terms of various aspects is presented in Table 1. The related work has been categorized based on the main idea of the research, the use of imperfect spectrum prediction, the benefits of prediction for spectrum handoff, channel access re-attempts within the same time cycle, an improved frame structure for ad hoc networks, network architecture, the target channel selection mechanism, and the performance evaluation of EDDT, throughput, percentage of collisions among SUs and cycle time utilization efficiency. Our work encompasses all of these aspects.

Proposed Scheme

In this section, we present our proposed proactive spectrum handoff scheme, which considers an imperfect channel state prediction and aims to reduce the collisions among SUs during spectrum handoff. This translates into improved EDDT and higher average throughput of the CRAHN. The network model, assumptions, improved frame structure, channel state prediction and target channel selection mechanism are presented below.

Network Model and Assumptions

We consider a centralized primary network with N_ch licensed channels for the PUs and a distributed secondary network having M idle channels, discovered out of the N_ch licensed channels in a transmission cycle (T_cycle) and available to SUs, as depicted in Figure 1. To characterize the effect of multiple interruptions caused by PUs to SUs, we consider a slot-based modeling technique [12], where the presence of a PU on any channel is checked only at the beginning of each slot, known as T_cycle. Due to the decentralized nature of the secondary network, coordination among the SUs is achieved through a dedicated global CCC, which is leased from the primary network and assumed to be available to all SUs [29]. To simplify the analysis, we have assumed the spectrum sensing process to be perfect, ignoring sensing errors, i.e., missed detections and false alarms [37]. Further, PU traffic is considered a binary stochastic process, i.e., each channel is either idle (H_0) or busy (H_1) [38]. In addition, the PU arrival process is modeled as a Poisson process with parameter (λ_p), and the channel holding time, i.e., the expected duration for which a PU is present on the channel, is modeled as an exponential distribution with parameter (µ_p) [39]. The probability of a channel being in the idle state is P(H_0) = (λ_p − µ_p)/λ_p, and of being in the busy state is P(H_1) = µ_p/λ_p. For the secondary network, the preemptive resume identical (PRI) M/M/c queuing model is considered, as shown in Figure 2. SUs can be preempted by PUs during their transmission; therefore, they have to either wait for the current channel to become idle again or switch to a target channel to resume the unfinished transmission. As for the service policy, each channel has two queues: a high-priority queue for PUs and a low-priority queue for SUs. The service policy within the same priority queue is assumed to be first come first served (FCFS).

(Figure 2: PRI M/M/c queueing model of the secondary network; labels: Poisson process, arriving packets, infinite queue, departing packets, λ_s γ packets.)
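A minimal sketch of the slot-based channel model just described follows, assuming each channel's state is an independent Bernoulli draw per T_cycle with busy probability ρ (consistent with the paper's later example where P(H_1) = ρ); the function names and seed are hypothetical.

```python
import random

def channel_states(n_ch: int, rho: float, rng: random.Random) -> list:
    """One T_cycle snapshot of all primary channels: 1 = idle (H0), 0 = busy (H1).
    P(H1) = rho, e.g. rho = 0.6 means 60% of channels busy on average."""
    return [1 if rng.random() > rho else 0 for _ in range(n_ch)]

def mean_idle_channels(n_ch: int, rho: float, n_cycles: int, seed: int = 1) -> float:
    """Mean number of idle channels discovered per T_cycle; should approach
    M = N_ch * (1 - rho), the value used later in the simulations."""
    rng = random.Random(seed)
    total = sum(sum(channel_states(n_ch, rho, rng)) for _ in range(n_cycles))
    return total / n_cycles

if __name__ == "__main__":
    for rho in (0.2, 0.6, 0.9):
        m = mean_idle_channels(n_ch=100, rho=rho, n_cycles=10_000)
        print(f"rho={rho:.1f}  mean idle channels ~ {m:.1f} (expected {100 * (1 - rho):.0f})")
```

This per-slot snapshot is exactly what the sensing phase of the frame structure below discovers and shares among the SUs.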
The average number of packets to be transmitted by an SU is denoted by γ, and the SU arrivals in each T_cycle form a Poisson process with parameter λ_s. Thus, the number of packets entering the system during a T_cycle is the product of λ_s and γ. The number of idle channels discovered (M) is a function of the load on the primary network (ρ) and varies per T_cycle.

Proposed Frame Format

The frame format of the proposed spectrum handoff scheme is shown in Figure 3; it builds upon the frame structure presented in Figure 1 of [18]. The proposed frame format consists of five phases: T_idle, T_pred, T_ss, T_cont and T_tran. The T_idle and T_cont phases have been added so that the scheme can operate in an ad hoc network environment and, in the case of a prediction error, give unsuccessful SUs a second chance at contention within the same T_cycle.

T_idle

Each T_cycle begins with an idle phase, T_idle, which is used by SUs to synchronize themselves before they start sending control information. The length of this phase is given by Equation (3), where aSIFSTime and aSlotTime are the short inter-frame space time and the slot time, respectively, as in [40].

T_pred

The prediction phase (T_pred) is divided into two sub-phases: (i) root node selection; and (ii) channel state prediction probabilities reporting. Considering the distributed nature of the network, one of the SUs has to be elected as a root node. The responsibility of the root node is to predict the probability of idleness of the primary channels and share this information with the other SUs in the system. The probability of idleness is calculated based on the sensing information of previous T_cycles. Therefore, an SU that has been present in the system for more T_cycles is the best candidate for predicting the channel state probabilities. The field "age in network" is used to elect the root node. The oldest node is the best choice for the root node, as it has channel state information for a greater number of T_cycles, which in turn leads to better prediction. Due to the decentralized nature of the ad hoc network, some node must provide coordination among SUs during random channel access to avoid collisions. As there is no central coordinator in an ad hoc network, a node has to be elected for reporting the predicted probabilities to all SUs; thus a root node is selected. For this purpose, SUs that are already in the system and were successful in channel access in the previous T_cycle share their age in the network and the number of remaining packets to be transmitted, using the sub-slot corresponding to the channel number used by the SU in the last T_cycle. Note that this sub-phase has N_ch sub-slots. As the oldest SU in the system is the best candidate for prediction, the SU with the highest age in the network is selected as the root node. In the case of a tie, where two or more SUs have identical age, the dispute is resolved by giving priority to the one using the smallest channel number. The number-of-remaining-packets field is used to organize SUs according to shortest job first. In the channel prediction probabilities reporting sub-phase, the root node predicts the probability of idleness for each channel (1 ≤ channel ≤ N_ch) based on the channel usage history, which is known to the root node through the channel sensing data collected in the previous n T_cycles.
The root node shares the predicted probability of idleness of each channel in the corresponding sub-slot of the second sub-phase. At the end of this phase, each SU selects its target channel according to Algorithm 1. Unlike existing proactive handoff schemes, where prediction results are assumed perfect, our scheme accounts for prediction error. The length of this phase, T_pred, is given by Equation (4), where aBitsTime is the time to transmit one bit for a given data rate (R) of the network. Each sub-slot of the root node selection sub-phase is 8 bits, as we have assigned 4 bits to the age-in-network field and 4 bits to the remaining-packets field. Each sub-slot of the prediction probability reporting sub-phase is 7 bits, considering two-decimal-place precision for the probability value.

T_ss

T_ss is the sensing and sharing phase, which has two sub-phases: (i) sensing; and (ii) sharing. In the first sub-phase, SUs cooperatively sense the spectrum to discover idle channels. The sensing result is then shared among users in the sharing sub-phase. If a channel predicted idle is also discovered idle during this phase, it is considered a true prediction; otherwise, it is a false prediction. This information is shared among the other users in the sharing sub-phase. The length of this phase, as in [36], is given by Equation (5).

T_cont

While the successful SUs, for which the channel state predictions are true, begin their data transmissions right away on their assigned channels, the unsuccessful SUs, which were deprived of channel access due to false prediction, along with the newly arriving SUs, contend for channel access in a random fashion in the contention phase (T_cont) [35]. Each contending SU selects a channel from the list of (M − N_cf) idle channels and exchanges RTS and CTS handshake messages; if successful, it transmits its data in T_tran on the channel it won during contention. The length of the contention phase is long enough to exchange RTS and CTS frames and, according to [36], is given by Equation (6), where (M − N_cf) is the number of idle channels available for the contention phase, and RTS and CTS denote the times to send the request-to-send (RTS) and clear-to-send (CTS) messages. It is to be noted that a primary feature of the proposed spectrum handoff scheme is that some of the SUs requiring spectrum handoff get contention-free access, while those without contention-free access get a second chance at channel access in the same T_cycle, thus significantly reducing the EDDT.

T_tran

After successful channel access, either by virtue of a true channel state prediction or by contending in random-access fashion, the remaining time in the T_cycle is used by the SUs for data transmission. The length of this phase is T_tran = T_cycle − T_overhead, where T_overhead is the time spent transmitting control information, which depends on the handoff scheme. For the proposed scheme and the two existing spectrum handoff schemes used for comparative analysis, i.e., non-handoff (NHO) and random handoff (RHO), the proposed scheme's overhead is T_overhead = T_pred + T_idle + T_ss + T_cont. In NHO, SUs follow an always-staying strategy for target channel selection, whereas an always-changing strategy is followed in the RHO scheme, in which the target channel is randomly selected. The proposed scheme follows prediction-based target channel selection for handoff.

Channel State Prediction

A binary series method [41] for predicting the probability of idleness of a channel is used in the model.
In this method, the channel states over the last n T_cycles, S_{n+1} ← S_n, S_{n−1}, S_{n−2}, ..., S_2, S_1, are analyzed to find the probability of idleness of the channel in the (n+1)th T_cycle. As per the conventional theory of probability [41,42], the probability of the channel being idle in the (n+1)th T_cycle is defined as the ratio of the number of T_cycles in which the channel was discovered idle to the total number of T_cycles sensed prior to the (n+1)th T_cycle, i.e., P_idleness(S_{n+1}) = (Σ_{i=1}^{n} S_i)/n, as in Equation (12), where S_i is the state of the channel during the ith T_cycle, which is "1" if the channel is idle and "0" if busy, and P_Th is the probability threshold. The channel is considered predicted idle if the probability of idleness given by Equation (12) is greater than P_Th; otherwise, it is predicted busy. To account for imperfect spectrum prediction, the probability of prediction error (P_ep) is expressed in terms of P(H_0) and P(H_1), which depend on the load on the primary network (ρ); e.g., at ρ = 0.6, on average 60% of the channels are occupied by PUs and 40% are available to the SUs, i.e., P(H_1) = 0.6 and, similarly, P(H_0) = 0.4. The probability distribution of the true channel state (from sensing) versus the prediction is presented in Table 2 [18], from which the probability of a channel being predicted idle (P_idleness(S_{n+1}) > P_Th) and the probability of a channel being predicted busy (P_idleness(S_{n+1}) < P_Th) follow.

Target Channel Selection

Target channel selection is achieved using Algorithm 1, where the selected target channel depends on the prediction result. In the case of a true prediction, contention-free channel access is granted using the shortest job first principle. Channels are arranged from the highest to the lowest probability of being idle, whereas SUs are arranged from the lowest to the highest number of remaining packets, i.e., shortest job first. Due to prediction errors, unsuccessful SUs contend for channel access in the same T_cycle using the procedure FalsePrediction of Algorithm 1, where SUs contend for channel access in a random fashion, resulting in either success or failure. In the case of failure, SUs wait for the beginning of the next T_cycle. The overall working of the proposed scheme is elaborated with the help of the flow diagram in Figure 4.

Algorithm 1 (fragment):
    for SU in System do                      /* existing SUs */
        sort(ascend)                         /* arrange SUs by RemPkts from lowest to highest */
    end for
    for Ch_i = 1 : N_ch do
        sort(descend)                        /* arrange all channels by probability of idleness from highest to lowest */
    end for
    for j = 1 to length(SU) do
        Ch_j ← SU_j                          /* channel with the jth highest probability of idleness is allotted to the SU with the jth shortest job */
        if Ch_j is sensed idle then
            transmit SU_j using Ch_j         /* contention-free access, as the prediction is true */
        ...
    procedure FalsePrediction
        contenders ← SU_n + SU_fp
        M_n ← list of available idle channels
        begin contention for channel access  /* all contenders contend for channel access in random-access fashion */
        if collision then
            wait until the beginning of the next T_cycle
        else
            transmit on the channel accessed successfully during contention
        end if
    end procedure
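The binary-series estimator and the SJF-based contention-free assignment of Algorithm 1 can be sketched as follows; the idleness estimate (mean of the last n binary channel states, compared against P_Th) and the pairing of the j-th shortest job with the j-th most-likely-idle channel follow the description above, while the names, data shapes and tie handling are hypothetical rather than the authors' code.

```python
def prob_idleness(history: list) -> float:
    """Binary-series estimate: fraction of the last n T_cycles in which the
    channel was sensed idle (S_i = 1 for idle, 0 for busy)."""
    return sum(history) / len(history) if history else 0.0

def predict_idle(histories: dict, p_th: float) -> list:
    """Channels predicted idle (estimate > P_Th), sorted from the highest to
    the lowest probability of idleness."""
    idle = [(ch, prob_idleness(h)) for ch, h in histories.items()
            if prob_idleness(h) > p_th]
    idle.sort(key=lambda pair: pair[1], reverse=True)
    return [ch for ch, _ in idle]

def sjf_assignment(rem_pkts: dict, predicted: list) -> dict:
    """Shortest job first: the SU with the fewest remaining packets is paired
    with the channel having the highest predicted probability of idleness."""
    sus = sorted(rem_pkts, key=rem_pkts.get)  # ascending remaining packets
    return dict(zip(sus, predicted))          # SU_j -> j-th best channel

if __name__ == "__main__":
    histories = {1: [1, 1, 0, 1], 2: [0, 0, 1, 0], 3: [1, 1, 1, 1]}
    predicted = predict_idle(histories, p_th=0.5)       # here: [3, 1]
    print(sjf_assignment({10: 4, 11: 2, 12: 9}, predicted))  # {11: 3, 10: 1}
```

An SU left out of this mapping, or one whose assigned channel turns out busy when sensed, falls through to the FalsePrediction contention path of Algorithm 1.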
Simulation and Results

In this section, the performance evaluation of the proposed spectrum handoff scheme and its comparison with the NHO and RHO schemes are presented. The simulation parameters used for the performance evaluation are listed in Table 3. The basic MAC protocol parameters for the proposed scheme are adopted from IEEE 802.11a [43]. A single slot time (aSlotTime), CR-SIFS (aSIFSTime), CR-RTS and CR-CTS are 9 µs, 16 µs, 24 µs and 24 µs, respectively. The data rate (R) of a channel is taken as 54 Mbps; therefore, the single-bit duration (aBitsTime) is 1/R = 0.0185 µs. T_idle, T_pred, T_ss and T_cont are calculated using Equations (3)-(6). The total number of channels available to the primary network is fixed at 100, and results are obtained by varying the load on the primary network (ρ) within the range [0.0, 0.9]. The mean number of contention slots, which is the number of idle channels sensed in each T_cycle for contention, depends on ρ and is calculated as M = N_ch(1 − ρ). The mean SU arrival rate (λ_s) in each T_cycle is varied over the values [0.5, 1, 2]. Increasing λ_s increases the number of contenders for the M idle channels in a T_cycle and raises the rate of collisions among SUs, eventually increasing the EDDT. Similarly, the mean number of packets to be transmitted by a newly arriving SU (γ) takes the values [3, 5, 10]. Increasing γ requires the SUs to spend more time completing the transmission of their data packets before departing the system. Therefore, for a high value of ρ, most of the channels are occupied by the PUs, leaving fewer opportunities for the SUs and causing SUs to remain in the system for a greater number of T_cycles due to the higher number of collisions. The service capacity of the secondary network is measured in terms of the average number of packets arriving into the system during a T_cycle, i.e., λ_s γ, and the average number of packets departing the system after successful transmission served by c channels, denoted µ_s = c/γ, where c ≤ M is the average number of the M idle channels successfully utilized for data transmission. For a stable system, the net arrival rate of packets into the system must be equal to or less than the net departure rate of packets out of the system, i.e., λ_s γ ≤ c/γ. Congestion in the system increases as λ_s γ increases, which can eventually choke the system. The performance analysis of the proposed scheme and the comparison with NHO and RHO have been conducted in terms of the percentage of collisions among SUs, EDDT, average throughput and cycle time utilization efficiency.

(Figure 4: flow diagram of the proposed scheme: start at the beginning of T_cycle; SUs synchronize; each SU advertises its age in the network and remaining packets in its designated sub-slot; if the selected target channel is also sensed idle, transmit contention-free; otherwise, sense and share the discovered idle channels and contend for one of the (M − N_cf) idle channels in random-access fashion; on collision, wait for the next T_cycle, else transmit.)

Percentage of Collisions among SUs

The probability of collision during random channel access when SUs contend for an idle channel is given in Equation (1). The probability of collision increases as the number of contenders increases or the number of available idle channels decreases. The number of idle channels discovered in each T_cycle depends on the load on the primary network (ρ). Figures 5-7 show the comparison of the percentage of collisions among SUs for RHO, NHO and the proposed scheme. It is observed that the percentage of collisions for NHO remains very low over the entire range of ρ. This is due to the fact that, in NHO, the interrupted SU remains loyal to its channel and, instead of switching, keeps waiting on the channel until it becomes idle again. Thus, target channel selection is performed only until the first successful channel access.
The same channel is then used until the SU completes its data transmission. In this case, collisions occur only among the newly arriving SUs and the existing SUs waiting on the channel. Primarily, collisions occur when SUs contend for a channel in a random, uncoordinated or uncontrolled manner. Therefore, the main comparison for the percentage of collisions among SUs is between the proposed scheme and RHO, where SUs attempt to access the channel randomly. The gain of the proposed scheme lies in true prediction, where SUs are organized according to the shortest job first (SJF) principle and access the channel in a coordinated manner. In the proposed scheme, contention-based access is required only when the prediction goes wrong. Figure 5 shows that, as ρ increases, the percentage of collisions among SUs increases, although very slowly, as depicted in the mini-graph. Beyond ρ = 0.8, however, the percentage of collisions increases drastically, as very few opportunities are left for the SUs. The mini-graph shows better performance for the proposed scheme, but the gain achieved by the proposed scheme at high load is much more prominent: at ρ = 0.9, λ_s = 0.5, γ = 5, the percentage of collisions among SUs in the proposed scheme is 6.41%, compared with 83.28% in the RHO scheme. As the SU arrival rate increases to λ_s = 1 and λ_s = 2, as shown in Figures 6 and 7, respectively, an earlier rise in the percentage of collisions among SUs is observed. However, the proposed scheme remains better throughout; a toy simulation of this effect is sketched below.
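The qualitative gap between RHO and the proposed scheme in Figures 5-7 can be reproduced with a rough Monte Carlo of the contention phase. This is a sketch under strong assumptions (uniform random picks, a fixed fraction of true predictions, M = N_ch(1 − ρ) idle channels), not the authors' simulator, and all parameter values are hypothetical.

```python
import random
from collections import Counter

def contention_round(n_contenders: int, m_idle: int, rng: random.Random) -> int:
    """One random-access round: each contender picks one of m_idle channels;
    returns how many SUs collided (shared a channel with someone else)."""
    if m_idle <= 0 or n_contenders <= 0:
        return 0  # no access attempts: deferring SUs simply wait for the next cycle
    picks = Counter(rng.randrange(m_idle) for _ in range(n_contenders))
    return sum(c for c in picks.values() if c > 1)

def collision_pct(n_su: int, rho: float, n_ch: int = 100, p_true: float = 0.8,
                  rounds: int = 5000, seed: int = 7) -> float:
    """Percentage of SUs colliding per cycle. A fraction p_true of the SUs is
    assumed to hold true predictions and get contention-free access; the rest
    contend for the leftover idle channels. p_true = 0 mimics RHO."""
    rng = random.Random(seed)
    m = max(int(n_ch * (1 - rho)), 0)   # idle channels, M = N_ch(1 - rho)
    n_cf = min(int(p_true * n_su), m)   # contention-free SUs this cycle
    collided = sum(contention_round(n_su - n_cf, m - n_cf, rng) for _ in range(rounds))
    return 100.0 * collided / (rounds * n_su)

if __name__ == "__main__":
    for rho in (0.5, 0.8, 0.9):
        print(f"rho={rho}: RHO ~ {collision_pct(20, rho, p_true=0.0):5.1f}%   "
              f"proposed ~ {collision_pct(20, rho, p_true=0.8):5.1f}%")
```

Even this crude model shows the collision percentage climbing with ρ under purely random access while staying far lower when most SUs bypass contention, mirroring the trend reported above.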
Extended Data Delivery Time

Figure 8 shows the comparison of the mean extended data delivery time, which is the time an SU spends in the system transmitting all of its data packets before departing the system. By virtue of the reduced collisions among SUs, the proposed scheme performs better than RHO and NHO in terms of EDDT throughout the considered range of load on the primary network (ρ). The mean EDDT values for RHO and the proposed scheme begin close to γ, remain at that level for primary loads up to 80% (ρ = 0.8), and rise sharply afterwards. This is because fewer opportunities are available to the SUs at higher PU load, resulting in more collisions and prediction errors. The mean EDDT value for NHO also begins close to γ but increases gradually as the load on the primary network increases. This is because, as ρ increases, the particular channel on which the SU stays, under the always-staying policy, becomes unavailable, causing the SU to wait for a greater number of cycles. Below, we discuss in detail the effects of varying the mean number of packets (γ) and the SU arrival rate (λ_s) on EDDT.

Effect of the Mean Number of Packets (γ) on EDDT

The mean number of packets (γ) transmitted by the arriving SUs has a direct impact on the mean EDDT. As γ increases, an SU has to stay in the system for more cycles to complete its data transmission. The net rate of packet arrival (packets entering the system) must be limited in such a way that the system does not destabilize and choke. This demands a higher rate of SU departures out of the system compared with the SU arrival rate. As we increase the mean number of packets to be transmitted by SUs to γ = 3, γ = 5 and γ = 10, as in Figures 8-10 for a fixed value of λ_s, we observe an increase in the EDDT. The proposed scheme outperforms the other two schemes under comparison; however, at a high value of γ combined with a high load on the primary network (ρ = 0.9), abrupt behavior is observed for RHO and the proposed scheme. In Figure 10, the dip observed beyond 80% load on the primary network for RHO and the proposed scheme is due to the fact that very few idle channels are available, so very few SUs are able to complete their data transmissions, as is evident from the mini-graph. The mini-graph clearly indicates that the proposed scheme is able to serve a higher number of SUs than RHO; the abrupt behavior is nevertheless due to the system choking and the inability of the simulation model to average out the EDDT, as we average only over completed jobs. Under identical conditions, NHO is still able to successfully serve some SUs, primarily due to its always-staying policy. At low primary load, the net arrival of packets into the system is less than the departures out of the system, as SUs are able to discover more idle channels. As the primary load increases, the opportunities for the SUs decrease, requiring them to spend more time in the system to complete their transmissions. With the system already congested, newly arriving SUs continue to pile up, eventually leading to instability and choking of the system. The abrupt behavior in the graph stems from this very reason and depicts an unstable condition in the system.

Effect of the Mean SU Arrival Rate (λ_s) on EDDT

As the SU arrival rate λ_s decreases from 1 to 0.5, keeping γ fixed, as shown in Figures 10 and 11, respectively, the system remains stable even at a high primary load, i.e., ρ = 0.9. Keeping γ fixed and limiting the SU arrival rate λ_s reduces the congestion in the CRN, thus allowing a higher number of SUs to complete their data transmissions. On the other hand, increasing λ_s increases the EDDT, because a large number of SUs accumulate in the system and in turn increase congestion. Consequently, SUs have to spend more time completing their data transmissions under all schemes. Except for the values of load on the primary network (ρ) at which the system chokes due to unavailability of radio channels, the proposed scheme successfully reduces the EDDT.

Average Throughput of the System

The throughput of the system is the amount of data transmitted in a given time period; thus, during a T_cycle, the throughput of the system in terms of bps/Hz is defined as in [18], where T_overhead is the time consumed in transmitting the control information and SNR_su is the signal-to-noise ratio of the transmitted signal measured at the SU receiver. Accordingly, the throughputs of the system for NHO, RHO and the proposed scheme follow from Equations (8)-(10), respectively. Figure 12 shows the average throughput of the system for λ_s = 0.5 and γ = 10; the proposed scheme outperforms the two other spectrum handoff schemes. The SU arrival rate (λ_s) and the mean number of packets (γ) have a direct impact on the average throughput of the system. As shown in Figure 13, changing the SU arrival rate (λ_s) from 0.5 to 1, while keeping the mean number of packets (γ) fixed, doubles the maximum average throughput of the system, i.e., from 260 Mbps to 535 Mbps. This is due to the fact that doubling the SU arrival rate (λ_s) doubles the number of concurrent SUs in the system.
As long as the system is below the choking level, the SUs are able to tap more opportunities, thereby increasing the maximum average throughput of the system. Comparing the results of Figures 13-15, it is observed that, as the load on the secondary network decreases with a smaller mean number of packets (γ), the system chokes at a higher value of load on the primary network (ρ). The average throughput of the system in Figures 12 and 14 is identical, as the product of λ_s and γ is the same; this reveals that the average throughput of the system depends on the product of λ_s and γ. Figure 14 shows the average throughput of the system for λ_s = 1, γ = 5.

Cycle Time Utilization Efficiency

SUs utilize only a portion of the cycle time in a given T_cycle on the allocated channel. The percentage utilization of T_cycle depends on the idle, prediction, sensing-sharing and contention times, which together account for the total overhead time (T_overhead). The cycle time utilization efficiency (η_Tcycle) is the ratio of the cycle time utilized for transmission (T_tran) to the total cycle time (T_cycle). The percentage utilization efficiency for data transmission for RHO, NHO and the proposed scheme follows from Equations (8)-(10) and (20), respectively. Although the proposed scheme is slightly less efficient than NHO in this respect, this is more than compensated for in the overall system utilization by virtue of prediction. True prediction not only eliminates the contention phase but also improves successful channel utilization. An identical behavior is observed on increasing ρ, as shown in Figure 17, with a slight improvement for RHO, which still remains below the proposed scheme.

Conclusions and Future Work

This work has focused on improving the performance of spectrum handoff for the SUs in a cognitive radio ad hoc network (CRAHN) and has developed a proactive, partially collision-free scheme that is based on imperfect channel state prediction. To the best of our knowledge, the proposed spectrum handoff scheme is the first attempt to include a prediction model that takes the imperfect channel state into consideration for reducing the collisions among the SUs, minimizing the extended data delivery time (EDDT) and improving the throughput of the SUs in the network. The performance evaluation indicated that the proposed scheme reduces the EDDT compared with the random-access spectrum handoff and non-handoff schemes by providing collision-free access to a certain number of the SUs. The analysis has also revealed that, by improving the channel state prediction, increased contention-free channel access can be granted, which reduces the collisions on the remaining idle channels for the rest of the SUs. This work can be extended in several directions, including the use of artificial neural network (ANN) or hidden Markov model (HMM) based channel state prediction techniques, the incorporation of imperfect sensing in the model (the current scheme assumes perfect sensing for simplicity), consideration of SU mobility, and the adoption of a priority-oriented class-of-service approach for various traffic types that can afford quality of service (QoS) guarantees to time-constrained applications in a CRAHN.

Conflicts of Interest: The authors declare no conflict of interest.
RNA Virus Reassortment: An Evolutionary Mechanism for Host Jumps and Immune Evasion

Reassortment is an evolutionary mechanism of segmented RNA viruses that plays an important but ill-defined role in virus emergence and interspecies transmission. Recent experimental studies have greatly enhanced our understanding of the cellular mechanisms of reassortment within a host cell. Our purpose here is to offer a brief introduction on the role of reassortment in segmented RNA virus evolution, explain the host cellular mechanisms of reassortment, and provide a synthesis of recent experimental findings and methodological developments. While we focus our discussion on influenza viruses, wherein most of the studies on reassortment have been carried out, the concepts presented are broadly applicable to other multipartite genomes.

What Is Virus Reassortment?

Virus reassortment, or simply reassortment, is a process of genetic recombination that is exclusive to segmented RNA viruses, in which co-infection of a host cell with multiple viruses may result in the shuffling of gene segments to generate progeny viruses with novel genome combinations (Fig 1A) [1]. Reassortment has been observed in members of all segmented virus families, including, for example, Bluetongue virus [2], but it is most prominently described for influenza viruses as a primary mechanism for interspecies transmission and the emergence of pandemic virus strains [3][4][5]. For instance, reassortment allows genetic markers that overcome adaptive host barriers to be acquired far faster than through the slower, incremental process of mutation alone. The emergence of new influenza genes in humans and their subsequent establishment to cause pandemics have been consistently linked with reassortment of novel and previously circulating viruses [4][5][6]. In contrast, recombination occurs through a template-switch mechanism, also known as copy-choice recombination. When two viruses co-infect a single cell, the replicating viral RNA-dependent RNA polymerase can dissociate from the first genome and continue replication by binding to and using a second, distinct genome as the replication template, resulting in the generation of novel mosaic-like genomes with regions from different sources [7,8], such as some circulating recombinant forms of HIV [9]. Although, in principle, recombination can occur in both segmented and non-segmented viruses, reports of recombination in segmented viruses have frequently been disputed [10,11] as weak evidence that arose through laboratory or bioinformatic artifacts [12,13]. Here we focus on virus reassortment, using the well-studied influenza virus as an example.

How Do Segmented Viruses Reassort within a Host Cell?
Essential prerequisites for reassortment include the entry of more than one virus particle into a single host cell, followed by the concomitant production of genome segments within that cell. Experimental systems have revealed a high frequency of multiple infections [1,14], although there is some evidence suggesting that specific viral proteins limit further infection [15]. Ultimately, the formation of viable infectious reassortants depends on the incorporation of one copy of each segment into a virus particle. Two alternative mechanisms for reassortment within the host cell have been proposed. The random packaging model [16,17] posits that viral RNA is incorporated into virions without discrimination among segments (though other viral or cellular RNAs are excluded); hence, viable reassortants with an entire genome set are formed by chance [16]. However, mounting evidence supports an alternative selective packaging model [18][19][20], which proposes that a virus particle packages eight unique viral RNA segments through specific packaging signals. Experimental visualization of RNA interactions [18] during virus assembly has revealed detailed interaction networks among virus segments, i.e., epistatic interactions of virus packaging signals, which are thought to play an important role in directing reassortment. Through the experimental swapping of packaging signals between influenza viruses of different types, Essere et al. [19] were able to overcome the bias observed towards specific genotypes. In an extreme case, Baker et al. [19,21] showed that swapping the packaging signals of two different species of influenza viruses enabled reassortment to form viable particles that have not been observed in nature, indicating a central role for these packaging signals in reassortment. Intuitively, the emergence of differences in the packaging signals of diverging virus lineages may lead to virus speciation. Such a phenomenon could explain the lack of reassortment between the two influenza virus species (A and B), which share structural and functional similarities and occupy the same ecological niche. Despite the lack of a mechanistic understanding of the function of packaging signals, these observational studies highlight important implications for viral evolution through epistatic interaction between gene segments and the emergence of novel reassortants.

How Is Reassortment Detected?

The identification of reassortment is important for detecting novel reassortants with increased transmissibility or pathogenicity, or those that escape antibody recognition or are resistant to antivirals. Reassortment is most commonly detected through incongruencies in the phylogenetic relationships among the different segments of a viral genome [22][23][24][25][26], as gene segments from the same virus isolate occupy conflicting phylogenetic positions due to differences in their evolutionary histories (Fig 1B). Early studies identified reassortment by manually detecting phylogenetic incongruence among different viral segments. However, this approach becomes impractical for large datasets, especially those with complex reassortment histories involving nested reassortments, or when there is a lack of phylogenetic support for reassortment among closely related sequences [27]. This has led to the development of several automated reassortment detection methodologies [28][29][30][31], but phylogeny-based methods have remained the most robust and popular approach for detecting reassortment [29,30]; a simplified sketch of the underlying idea follows.
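As a rough illustration of the phylogeny-based idea above, candidate reassortants can be flagged by checking whether an isolate's nearest neighbor differs between trees built from different genome segments. This is a simplified heuristic sketch, not one of the published methods [28][29][30][31]; it assumes Biopython is available, and the tree files, tip labels and nearest-neighbor criterion are hypothetical.

```python
from Bio import Phylo  # assumes Biopython is installed

def nearest_neighbor(tree, focal: str) -> str:
    """Tip with the smallest patristic (branch-length) distance to the focal
    taxon within one segment's tree."""
    others = [tip for tip in tree.get_terminals() if tip.name != focal]
    return min(others, key=lambda tip: tree.distance(focal, tip.name)).name

def segment_neighbors(tree_files: dict, focal: str) -> dict:
    """Nearest neighbor of the focal isolate in each segment tree; disagreement
    across segments marks the isolate as a candidate reassortant."""
    neighbors = {}
    for segment, path in tree_files.items():
        tree = Phylo.read(path, "newick")  # one Newick tree per genome segment
        neighbors[segment] = nearest_neighbor(tree, focal)
    return neighbors

if __name__ == "__main__":
    trees = {"HA": "ha.nwk", "NA": "na.nwk", "PB1": "pb1.nwk"}  # hypothetical files
    nn = segment_neighbors(trees, focal="A/example/2015")
    if len(set(nn.values())) > 1:
        print("Segment trees disagree; candidate reassortant:", nn)
```

In practice, the published tools add statistical support (e.g., bootstrap agreement and temporal signal), since a nearest-neighbor swap alone can also arise from phylogenetic noise.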
Several extensions of the phylogenetic method have also been successfully applied to estimate past reassortment of viral lineages, including coalescent-based Bayesian phylogenetics, which infers and compares the time of the most recent common ancestor (TMRCA) of each segment to infer possible reassortment [32]; multi-dimensional scaling of tree distances [25,32]; and, more recently, time-resolved Bayesian phylogenetics and trait state changes [33][34][35]. In addition, several distance-based methods exist [27], in which degrees of similarity between pairs of viral genomes are used to infer reassortment [36,37]. Recently, a study used a novel method based on the rapid rate of amino acid replacement following reassortment to detect reassortment events [27]. While all the studies listed above aim at identifying reassortment events and strains, methodologies that infer an explicit rate of reassortment are rare; examples include [33,34,38].

What Do Genomic Studies Tell Us about Reassortment?

Influenza exhibits high levels of mixed infection in all major hosts [39][40][41][42], with up to 25% of all infections in avian hosts involving multiple influenza subtypes. However, large-scale genomic studies have identified various levels of restriction on random reassortment between co-circulating influenza viruses, which differ depending on host, subtype and preferential genetic combinations [35,36,[43][44][45][46]. The greatest frequency of influenza reassortment is observed in the natural reservoir, wild aquatic birds [40], where viruses of different subtypes frequently exchange gene segments. However, reassortment is more restricted in other hosts, particularly humans. Reassortment between human seasonal influenza viruses of different subtypes (A/H1 and A/H3 viruses) is rare [47], despite co-circulation for over 40 years and extensive evidence of mixed infection [39]. Furthermore, studies of human influenza viruses have shown that certain combinations of gene segments are consistently detected in surveillance, suggesting either preferential assortment of these gene segments or a fitness advantage of these combinations. Convincing evidence comes from the two co-circulating and frequently reassorting lineages of influenza B viruses [35,48], whose virions consistently contained the polymerase basic 1 and 2 and the hemagglutinin (HA) genes (PB1-PB2-HA) from a single lineage [35]. Similarly, preferential combinations of segments are transiently observed for human influenza A viruses [45,46].

What Are the Consequences of Virus Reassortment?

The tremendous genomic novelty generated by reassortment confounds all current methods of virus control. Evolutionary studies indicate an advantage for gene lineages with reassorting backgrounds. Specifically, a significant increase in transient amino acid mutations is observed following reassortment [27], primarily in the surface glycoprotein hemagglutinin, the major immunogenic protein of influenza, which leads to antigenic change [25,32]. This suggests that the placement of HA within novel genetic backgrounds through reassortment greatly affects virus fitness and directly influences antigenic variation, contributing to the long-term evolution of the virus. However, reassortment could lead to evolutionary change through various other factors, including selection pressure induced by herd immunity, residues being under weak selective constraint, or compensation for the fitness costs of mutations accruing elsewhere in the genome [25].
Similarly, drug-resistance mutations may be acquired following reassortment, as shown for the emergence of amantadine-resistant H3N2 viruses [49] and oseltamivir-resistant seasonal H1N1 viruses [50]. These studies suggest that reassortment confounds available methods of virus control, although a detailed examination of the role of reassortment in driving genome-wide evolution is still needed.
Immunological Involvement of MicroRNAs in the Key Events of Systemic Lupus Erythematosus

Systemic lupus erythematosus (SLE) is an archetype autoimmune disease characterized by a myriad of immunoregulatory abnormalities that drive injury to multiple tissues and organs. Due to the involvement of various immune cells, inflammatory cytokines, and related signaling pathways, researchers have spent a great deal of effort to clarify the complex etiology and pathogenesis of SLE. Nevertheless, current understanding of the pathogenesis of SLE is still in the early stages, and available nonspecific treatment options for SLE patients remain unsatisfactory. First discovered in 1993, microRNAs (miRNAs) are small RNA molecules that control the expression of one third of human genes at the post-transcriptional level and play various roles in gene regulation. The aberrant expression of miRNAs in SLE patients has been intensively studied, and further studies have suggested that these miRNAs may be potentially relevant to abnormal immune responses and disease progression in SLE. The aim of this review is to summarize the specific miRNAs that have been observed to be aberrantly expressed in several important pathogenetic processes in SLE, such as DC abnormalities, overactivation and autoantibody production of B cells, aberrant activation of CD4+ T cells, breakdown of immune tolerance, and abnormally increased production of inflammatory cytokines. Our summary highlights a novel perspective on the intricate regulatory network of SLE, which helps to enrich our understanding of this disorder and to ignite future interest in evaluating the molecular regulation of miRNAs in SLE autoimmunity.
INTRODUCTION

Systemic lupus erythematosus (SLE) is a severe autoimmune inflammatory disease with a broad range of clinical manifestations, characterized by loss of tolerance to self-antigens, activation of dysregulated autoreactive T cells and B cells, production of autoantibodies (auto-Abs) and perturbed cytokine activities (1). Approximately 50% of SLE patients develop life-threatening complications, such as nephritis, vasculitis, pulmonary hypertension, interstitial lung disease, and cerebral stroke (2). Current studies suggest that SLE is associated with dysregulation of the innate and adaptive immune responses, likely rooted in the intricate interactions among environmental stimulants, sex hormone imbalance, genetic predisposition, epigenetic regulation, immunological factors, and other undefined factors, resulting in a breach of self-tolerance characterized by uncontrolled activation and expansion of dendritic cells (DCs) and lymphocytes, coupled with the production of copious amounts of anti-nuclear and anti-phospholipid antibodies. Even so, the current understanding of the immunological events that trigger the onset of clinical manifestations of SLE is still in its early stages. At present, the primary treatment for SLE is based on conventional nonspecific immunosuppressants, but this treatment option is unsatisfactory because of the associated side effects, including infection, malignancy, metabolic disturbances, and infertility (1). Meanwhile, given the multitude of active pathways in a disease as heterogeneous as SLE (1), the role of single-target approaches with inhibitors such as anti-CD20, anti-interferon-α and anti-IL-6 may be too limited. More extensive and in-depth study of SLE from different perspectives will contribute to a more comprehensive understanding of this disease and potentially open up exciting new therapeutic possibilities for treating this multifactorial disorder. MicroRNAs (miRNAs) are a large family of endogenous, single-stranded, small (~22 nucleotides), non-protein-coding RNA molecules that modulate gene expression at the post-transcriptional level and protein synthesis in higher eukaryotes (3). In 1993, Lee et al. (4) first found that the lin-4 gene encoded small RNAs, rather than proteins, which controlled the temporal development of Caenorhabditis elegans. With the development of molecular cloning and bioinformatics technology, more than 3,800 miRNAs have been identified so far, widely distributed in plants, animals, and viruses. Moreover, a recent study estimated that there are 2,300 mature miRNAs in humans (5), whose genes constitute about 1%-3% of the human genome sequence; the expression of approximately one third of human genes is regulated by mature miRNAs (3,6). It has been reported that miRNAs can epigenetically regulate a variety of biological processes, including embryo development; cell differentiation, proliferation, and apoptosis; signal transduction; and metabolism (7). In terms of the regulation of the immune system, increasing evidence suggests that miRNAs are involved in the regulation of innate and adaptive immune cells (8). It is therefore not surprising that dysregulation of miRNA expression has been implicated in the progression of a broad range of diseases, and some miRNAs have been identified as diagnostic and/or prognostic biomarkers of various conditions, including cancer (9), diabetes (10), viral infection, cardiovascular diseases (11), and kidney diseases (12).
In addition, miRNAs can affect the occurrence and development of autoimmune diseases through different pathways, including the release of inflammatory mediators, innate immune responses, lymphocyte function, and the signaling of toll-like receptors (TLRs) and nuclear factor (NF)-κB (13). In 2007, Dai et al. (14) found differences between the miRNA expression profiles of SLE patients and normal controls: seven miRNAs were down-regulated (miR-196a, miR-L7-5p, miR-409-3p, miR-141, miR-383, miR-112, and miR-184) and nine were up-regulated (miR-189, miR-61, miR-78, miR-21, miR-142-3p, miR-342, miR-299-3p, miR-198, and miR-298), suggesting that miRNAs are potential diagnostic markers of SLE and may be important factors in the pathogenesis of the disease. Since the publication of the work of Dai et al. (14), more and more studies have demonstrated that aberrantly expressed miRNAs have the ability to promote different immunological events in SLE, but the exact mechanisms of these miRNAs are still largely unknown. The study of miRNAs in immune cells during active SLE has opened up potential new avenues for a more comprehensive understanding of SLE and may provide new therapeutic clues to improve patient outcomes, which remains to be confirmed and requires further investigation. Alterations to miRNAs have been exploited as potential tools and targets for novel therapeutic approaches in many diseases. The first miRNA-based therapeutic agent was approved in 2013 for the treatment of familial hypercholesterolemia. Many miRNA-targeted therapies have advanced clinically, including phase I clinical trials of miR-34, a mimic of the tumor-suppressor miRNA, for the treatment of cancer (15), and phase II clinical trials of anti-miRNAs targeting miR-122 for the treatment of hepatitis (NCT01646489, NCT01727934, NCT01872936, NCT01200420). MiRNAs have an intriguing potential role in the development and deterioration of SLE, which may allow for the development of more effective therapies with fewer side effects to mitigate this disorder. Therefore, this article reviews the current understanding of miRNAs and summarizes the impact of miRNA dysregulation on several important immunological events in SLE, including the dysfunction of immune-related cells, aberrant immune cell signaling, and the production of inflammatory cytokines. This summary helps to enrich the current understanding of the intricate immunological regulation network of SLE and to stimulate future interest in evaluating the molecular regulation of SLE.

BIOGENESIS AND FUNCTION OF miRNAs

In humans and animals, the synthesis of mature miRNAs is initiated by the transcription of nuclear genes into primary RNA transcripts (pri-miRNAs) (3), which are cleaved by the ribonuclease III (RNase III) enzyme Drosha and the protein DiGeorge syndrome critical region 8 into precursor miRNAs (pre-miRNAs) (6). After being transported from the nucleus into the cytoplasm by exportin-5/Ran-GTP, pre-miRNAs are processed by Dicer and the trans-activation response RNA-binding protein to yield miRNA duplexes.
One of the functional strands is loaded into the RNA-induced silencing complex (RISC) to form an asymmetric RISC assembly (16), which interacts with the target messenger RNA (mRNA) to regulate the expression of target genes after transcription (17) (Figure 1) via one of two mechanisms of action, depending on whether the single-stranded miRNA in the RISC assembly is completely complementary to the target mRNA 3'-untranslated region (3'-UTR): if so, the mRNA is cleaved and degraded by the RISC assembly; if not, the RISC blocks translation of the target gene. miRNAs mainly follow the first mechanism in plants and the second in animals. A complex regulatory network is formed between miRNAs and target gene mRNAs, thus affecting the course of disease through post-transcriptional regulation without changing the gene sequence (18). In short, the extent of complementarity between the miRNA seed region and the target mRNA 3'-UTR determines the mechanism of miRNA-mediated gene regulation: translational repression or mRNA cleavage and degradation.

THE ROLES OF miRNAs IN DC ABNORMALITIES IN SLE

In general, DCs have a unique sentinel function, continuously detecting danger signals from the environment through innate pattern-recognition receptors such as TLRs, which have the ability to capture antigens through binding to microbes or endogenous tissues (8). Inappropriate or dysfunctional antigen presentation by DCs might promote the breakdown of T-cell and B-cell tolerance in SLE (19). Patients with SLE show multiple DC abnormalities, including a decrease in the number of circulating normal myeloid DCs (mDCs) but an increase in the number of plasmacytoid DCs (pDCs). Similar to mDCs, pDCs upregulate the expression of T-cell costimulatory molecules such as CD80, CD86 and CD40 upon antigen stimulation and serve as antigen-presenting cells to prime and activate T cells (20). Distinctively, the pDC subset specializes in producing type I interferon in response to single-stranded RNA and hypomethylated CpG DNA, via TLR7 and TLR9 (21,22). These unique features allow pDCs to play a crucial role in the pathogenesis of SLE and have been shown to correlate with disease manifestations, including the SLE-hallmark anti-dsDNA autoantibodies. Recent studies have described the involvement of let-7c, miR-155 and miR-150 in regulating the functions of pDCs in response to TLR stimulation (Figure 2A). B lymphocyte-induced maturation protein-1 (Blimp1) has been identified as an important transcriptional repressor of let-7c miRNA (23). Expression of let-7c miRNA influences differentiation and functional homeostasis in B cells and T cells, respectively (24). In the DC-specific absence of Blimp1, an increase in let-7 miRNA results in a broadly proinflammatory DC phenotype, mediated in part through suppression of suppressor of cytokine signaling 1 (SOCS1) expression (23). This research on let-7c miRNA enriches our understanding of the mechanisms underlying the association of Blimp1 polymorphisms with risk for human autoimmune disorders such as SLE and inflammatory bowel disease. The pDCs derived from symptomatic mice showed functional hypersensitivity to TLR7, as represented by the elevated upregulation of CD40, CD86 and MHC class II molecules. In addition, Yan et al.
(25) showed an enhanced induction of miR-155 in SLE mice in response to TLR7 stimulation; CD40 expression was significantly upregulated, with a negative correlation to expression of the miR-155 primary target SH2 domain-containing inositol 5'-phosphatase 1 (SHIP-1). According to the research of Gao et al. (26), miR-150 inhibited the expression of TREM-1, which potently amplifies the function of TLR4 and thereby enhances inflammatory responses in splenic cDCs in lupus-prone mice. These studies enrich our understanding of the pathogenesis of DC dysfunction in SLE. Compared with the role of miRNAs in adaptive immune cells, the contribution of miRNAs to DC activation has been examined in only a few studies, and further research is needed in this field. THE ROLES OF miRNAs IN OVERACTIVATION AND AUTOANTIBODY PRODUCTION OF B CELLS IN SLE Abnormalities of B cells are important characteristics of the pathogenesis of SLE. Although B cells are best known for their ability to produce autoantibodies, they also mediate pathogenic functions through antibody-independent activities, including the presentation of antigen to T cells, co-stimulatory functions via the expression of accessory molecules engaging stimulatory receptors on T cells, and the production of cytokines (27). Furthermore, B cell depletion therapy can have beneficial effects on patients with these disorders (28), highlighting the importance of B cells in the pathogenesis of autoimmune diseases. Autoreactive antibodies promote pathogenesis by causing acute and chronic inflammation and tissue necrosis with the participation of complement, or by directly interacting with their antigens to destroy tissue cells, thus leading to the multi-system damage of SLE (29,30). In addition, B cells can contribute to SLE pathogenesis through other pathways: for example, B cells can act as antigen-presenting cells and correlate with the activation of other crucial lymphocytes in SLE, and certain B cell subtypes may secrete anti-inflammatory cytokines in SLE. Therefore, the study of B lymphocytes can potentially unravel important pathogenic mechanisms of SLE. Previous studies have delineated several signaling pathways that contribute to the over-reactivity of B cells in SLE, including Janus kinase/signal transducer and activator of transcription (JAK-STAT), B cell receptor/phosphatidylinositol 3-kinase (PI3K)/protein kinase B (AKT), and TLRs (31), although the detailed molecular mechanisms remain to be elucidated. Mice that are deficient in various inhibitory molecules that dampen B-cell receptor (BCR) signaling, such as SHIP-1 (32), Lck/Yes novel tyrosine kinase (LYN) (33), or Fcγ receptor IIb (FcγRIIb) (34), develop systemic autoimmunity. Studies of the aberrant expression of several miRNAs in the B cells of SLE patients have reported that miR-7, miR-155, miR-30a, and miR-15a were up-regulated, while miR-1246 was down-regulated (Figure 3), and studies have preliminarily demonstrated their roles in SLE. A recent study reported that the up-regulation of miR-7 in the B cells of SLE patients can negatively regulate the expression of phosphatase and tensin homolog (PTEN), which results in upregulated activation of PI3K/AKT signaling (35,36). MiR-7-mediated down-regulation of PTEN, with consequent activation of AKT signaling, promoted the differentiation of B cells into plasmablasts/plasma cells and the formation of spontaneous germinal centers in the MRL/lpr mouse model (37).
Treatment with a miR-7 antagomir showed therapeutic value in vivo in MRL/lpr mice, alleviating the clinical manifestations of organ damage in this lupus mouse model (37). Thai et al. (38) reported that ablation of miR-155 in lupus-prone mice with death receptor deficiency (Fas lpr) restored to normal levels the reduced expression of SHIP-1, which acts downstream of inhibitory cell-surface receptors such as FcγRIIb. These processes contributed to decreased serum levels of IgG anti-dsDNA antibodies and kidney inflammation, and thus reduced autoantibody responses in lupus-like disease. In addition, miR-155 acts as a suppressor of autoimmunity through transcriptional repression of PU.1 (a crucial regulator of B-cell development) and TNF-α, which in turn suppresses the expression of B cell-activating factor belonging to the TNF family and of the CD19 protein (39). As with miR-7 and miR-155, Liu et al. (40) observed that miR-30a expression was significantly increased in the B cells of SLE patients, and that miR-30a directly decreased the expression of LYN by targeting the 3'-UTR of LYN mRNA. LYN is a member of the Src family of non-receptor tyrosine kinases (41) and a key mediator in several pathways of B cell activation, such as CD19 and CD180 (33). In addition, significantly decreased LYN levels have been observed in the B cells of SLE patients (42). Thus, high miR-30a expression can regulate B cell proliferation and antibody production in SLE patients, suggesting that miR-30a might be involved in the pathogenesis of SLE. B cell lymphoma-2 (Bcl-2) is an important component of the apoptotic pathway. In the human genome, four members of the miR-15/16 family share the same 9-bp Bcl-2-complementarity sequence; this functional redundancy indicates that Bcl-2 expression is regulated by a very fine mechanism. MiR-15a has been demonstrated to potentially regulate the balance of the B-10 and B-1 cell subsets and was positively correlated with autoantibody levels in lupus owing to its differential expression in B cell subpopulations (43). Downregulation of Bcl-2 expression by miR-15a overexpression activates the apoptotic pathway of the B-10 subset (44), which has been shown to suppress lupus in the B/W mouse model and other inflammatory diseases via preferential production of IL-10 (45). The induced loss of this regulatory B cell subset may lead to more prominent autoantibody production (46). In contrast, miR-1246 expression was negatively correlated with the activation of B cells in SLE patients (47). Further research verified that decreased miR-1246 expression reduced the inhibitory effect on the expression of early B cell factor 1 (EBF1), which contributed to the development, activation, and proliferation of B cells via activation of the AKT signaling pathway. The upregulated expression of EBF1 increases the production of the B cell surface co-stimulatory molecules CD40, CD80, and CD86, which then enhances B cell function. As outlined above, the mechanisms by which miR-7 and miR-155 contribute to B cell hyperresponsiveness and autoantibody responses have been well studied. Furthermore, single interventions targeting miR-7 or miR-155 alleviate disease manifestations or inhibit lupus development in mouse models, suggesting a critical role for these miRNAs in lupus progression and their potential as treatment strategies in SLE. MiR-15a, miR-30a and miR-1246 have been shown to act on factors that contribute to B cell overactivation.
Further studies are needed to clarify the roles of these miRNAs in SLE and whether they have the potential to be developed into new therapeutic targets. THE ROLES OF miRNAs IN ABERRANT ACTIVATION OF CD4+ T CELLS IN SLE T cells play a critical role in the pathogenesis of SLE by initiating and amplifying the inflammatory process through direct contact with other immune cells in lymphoid organs, secretion of pro-inflammatory cytokines, and direct effects on target tissues. Naive CD4+ T cells can differentiate into various effector T (Teff) cell subsets, including Th1, Th17, Th2 and follicular helper T (Tfh) cells. Continuously stimulated T cells in lupus are likely to contribute to the disease by secreting inflammatory cytokines and supporting B cells to produce a wide variety of high-affinity autoantibodies through contact-dependent mechanisms (mediated by CD40L:CD40, OX40L:OX40, and so on), which is an important characteristic of SLE. In addition, stimulation of autoreactive CD4+ T cells can foster the differentiation of CD8+ T cells into cytotoxic T lymphocytes with the help of inflammatory cytokines. However, the mechanisms that cause the aberrant activation, differentiation and function of T cells in SLE remain largely unclear. Genome-wide analysis has revealed that global DNA methylation levels are reduced by 15%-20% in the CD4+ T cells of patients with active SLE (48), especially in genes involved in disease pathogenesis and progression, such as ITGAL, CD40LG, CD70, and PPP2CA. DNA methylation is an elementary determinant of chromatin structure that is established during development by de novo DNA methyltransferases (DNMTs) and has potent suppressive effects on transcription. DNMT1 serves to maintain the methylation status of proliferating cells (49). Moreover, T cells of mice treated with procainamide and other inhibitors of DNA methylation can induce SLE in recipient mice (50). The pathological significance of the autoreactivity induced by inhibiting DNA methylation in T cells was further investigated. Pan et al. (51) demonstrated increased expression of miR-21 and miR-148a in SLE patients and SLE-prone MRL/lpr mice, and showed that both miRNAs reduced DNMT1 expression, which contributed to epigenetic changes via DNA hypomethylation. MiR-21 indirectly inhibits DNMT1 expression by targeting the important autoimmune gene RAS guanyl nucleotide-releasing protein 1 (RASGRP1), a critical regulator of the RAS/mitogen-activated protein kinase signaling cascade upstream of DNMT1 in T cells (51). Another study confirmed that enhanced miR-21 expression also contributed to the aberrant phenotype of T cells in SLE, possibly through interaction with its predicted target gene, programmed cell death protein 4 (PDCD4) (52). Silencing of miR-21 in vivo can efficiently alter the course of autoimmune splenomegaly in lupus mice (53). On the other hand, miR-148a directly inhibits DNMT1 expression by targeting the protein-coding region of its transcript.
These data clearly showed that abnormally expressed miRNAs in SLE patients have a critical functional link with the aberrant DNA hypomethylation in lupus CD4+ T cells, resulting in the overexpression of autoimmune-associated methylation-sensitive genes, such as those that encode CD70 (tumor necrosis factor (ligand) superfamily, member 7 [TNFSF7]), CD40 ligand (TNFSF5), and lymphocyte function-associated antigen 1 (LFA-1, integrin αLβ2, CD11a/CD18) (54), which contribute to the autoreactivity and overstimulation of CD4+ T cells in SLE (51). Many studies have found that the expression of miRNAs such as miR-126 (55) and miR-29b (56) is markedly altered in the CD4+ T cells of SLE patients and is involved, either directly or indirectly, in decreasing DNA methylation levels, leading to aberrant activation and differentiation of CD4+ T cells. In addition to aberrant DNA methylation, miRNAs in the CD4+ T cells of patients with SLE can regulate T cell activation in other ways. A recent study found that miR-142-3p/5p expression was decreased in the CD4+ T cells of patients with SLE (57). MiR-142-3p specifically targets interleukin-10 (IL-10) and the signaling lymphocytic activation molecule (SLAM) family member CD84, while miR-142-5p targets the 3'-UTR of SLAM-associated protein (SAP). Thus, decreased miR-142-3p/5p expression contributes to the upregulation of CD84, IL-10 and SAP, resulting in increased T cell function and immunoglobulin (Ig) G production in co-cultured B cells. Reduced expression of miR-142-3p/5p in the CD4+ T cells of patients with SLE thus activates T cells and hyperstimulates B cells (58). In addition, the overexpression of signal transducer and activator of transcription 1 (STAT1) and the underexpression of apoptosis inhibitory protein 5 (API5) in the T cells of SLE patients reflect aberrant levels of their targeting miRNAs, miR-145 and miR-224, respectively. Aberrant expression of miR-145 and miR-224 can thus promote T cell activation-induced cellular apoptosis by suppressing API5 expression, and SLE-associated nephritis by enhancing STAT1 expression (59) (Figure 2B). According to current studies, a critical functional link between miRNAs and lupus CD4+ T cells is established by the interplay between miRNAs and critical molecules such as the SLAM family, STAT1 and DNMT1, which contribute to T cell abnormalities and hypomethylation in SLE. Moreover, transfection of miRNAs or miRNA inhibitors has beneficial effects on alleviating the CD4+ T cell disease phenotype, highlighting the important role of miRNAs in lupus-like CD4+ T cell phenotype transformation. Further research is needed to evaluate the place of miRNAs in the complicated regulatory networks of DNA hypomethylation in SLE. THE ROLES OF miRNAs IN BREAKDOWN OF IMMUNE TOLERANCE IN SLE SLE is characterized by a broad breakdown of immune tolerance, with systemic inflammation involving the dysregulation of immune responses. Tregs are a unique subpopulation of CD4+ T cells with an indispensable role in maintaining self-tolerance by suppressing autoreactive lymphocytes and restraining excessive immune responses by controlling the responses of Teffs (60,61). Tregs characteristically express CD25 (the IL-2 receptor α chain) and the lineage-specific transcription factor forkhead box P3 (Foxp3). An imbalance between Teffs and Tregs is central to the pathogenesis of SLE. Pathogenic Teffs in SLE are mainly Tfh and Th17 cells.
Tfh cells assist the activation of B cells to produce autoantibodies, resulting in multiple organ damage, while Th17 cells secrete pro-inflammatory cytokines that amplify immuno-inflammatory responses, resulting in tissue damage. Previous studies have reported that the proportions of Tfh and Th17 cells are increased in SLE patients and correlate with disease severity (62), although the underlying mechanisms remain unclear. MiR-125a is commonly downregulated in the peripheral CD4+ T cells of patients with various autoimmune diseases, such as SLE and Crohn's disease; this miRNA suppresses several Teff factors, including STAT3, IFN-γ, and IL-13 (63). A recent study reported sustained delivery of miR-125a into splenic T cells in a mouse model of SLE with the use of a nano-delivery system, which significantly alleviated disease progression by reversing the imbalance of Teffs and Tregs (64). These findings point to miR-125a as a critical factor that restricts the development of SLE by stabilizing Treg-mediated immune homeostasis. Xie et al. (65) reported that miR-34a in the peripheral blood mononuclear cells (PBMCs) of SLE patients played a potential role in disease activity, with expression levels positively correlated with several serum disease indexes, including rheumatoid factor, anti-streptolysin antibody, erythrocyte sedimentation rate, and C-reactive protein. Further research demonstrated that miR-34a attenuated human and murine Foxp3 expression (66,67) by targeting its 3'-UTR, thereby limiting the differentiation of Tregs and impairing the balance of Tregs and Th17 cells. The release of IL-6 or tumor necrosis factor (TNF)-α in the inflammatory environment can activate the NF-κB pathway and increase the expression of miR-34a by enhancing promoter activity, resulting in Foxp3 downregulation and Treg/Th17 imbalance (65). Meanwhile, miR-31 overexpression also inhibits the differentiation of Tregs by targeting Foxp3 and other molecules that are indispensable for Treg development, such as G protein-coupled receptor class C group 5 member A and protein phosphatase 6c (68,69). MiR-142-5p positively regulates intracellular levels of cyclic adenosine monophosphate (cAMP) to maintain the suppressive function of Tregs (58). In contrast, miR-142-3p can restrict cAMP levels in CD4+ T cells, which compromises the inhibitory function of Tregs (70). Many studies have confirmed that the expression levels of the above-mentioned miRNAs are markedly altered in the Tregs of SLE patients and that aberrant regulation of Tregs is involved in the development of SLE. Other miRNAs, such as miR-99a, miR-17 (71,72), and miR-150 (73), can also regulate the function of Tregs either directly or indirectly, which potentially provides new clues for future research on the development of SLE (Figure 2C). In recent years, other cells with regulatory capabilities have been identified, such as regulatory B cells and natural killer T (NKT) cells. As mentioned above, miR-15a overexpression contributes to activating the apoptotic pathway of the B-10 subset (44), thereby weakening its suppressive effects on SLE and other inflammatory diseases (45). In another study, the recognition between iNKT cells and B cells through CD1d was shown to be associated with the tolerance of NKT cells (74). Increased miR-155 contributes to inhibiting CD1d expression in B cells by directly targeting the 3'-UTR of CD1d, and thus impairs the tolerance of NKT cells (75).
THE ROLES OF miRNAs IN ABNORMALLY INCREASED PRODUCTION OF INFLAMMATORY CYTOKINES IN SLE Cytokines are a family of small proteins that play crucial roles as messengers of immune pathways and in the regulation of leukocyte activation. The balance between proinflammatory and anti-inflammatory cytokines is thought to influence the clinical manifestations of many inflammatory diseases, such as SLE and rheumatoid arthritis. These cytokines are mainly produced by helper T (Th) cells, which can be classified by functional effect into Th1 (IFN-γ, IL-2, TNF-α), Th2 (IL-4, IL-5 and IL-6), Th17 (IL-17), and Treg (IL-10) subsets. High levels of inflammatory cytokines may lead to the exacerbation of inflammatory responses, apoptosis, and the production of autoantibodies that initiate and sustain SLE disease activity (76,77). Dysregulation of chemokine production has been linked to the clinical manifestations and disease activity of SLE (78). Multiple factors contribute to the immune modulation of cytokines, including genetic polymorphisms, environmental factors, and hormonal alterations, among others, leading to irreversible impairment of self-immunological tolerance. Type I IFNs are a family of cytokines produced by innate immune cells, especially plasmacytoid DCs, and by tissue cells when viral components are perceived via retinoic acid-inducible gene I (RIG-I)-like receptors and TLRs. Increased serum levels of IFN in lupus patients were described more than 40 years ago (79). Among the key immunological alterations in SLE, type I IFNs and related signaling pathways have been shown to play pivotal roles in disease pathogenesis (80,81). Overexpression of type I IFNs can break tolerance and induce autoimmune diseases via increased expression of major histocompatibility complex I (MHC I) molecules (82,83), which enhances the cross-presentation of exogenous antigens. Other immune-response molecules whose expression is promoted by IFN include MHC II, CD40, CD80, and CD86, in addition to chemokines and their cognate receptors, such as chemokine (C-X-C motif) ligand 10 and CXC chemokine receptor 3 (84). In response to IFN stimulation, DCs mature and transform into active antigen-presenting cells (APCs). Potent APCs induce the differentiation of naive CD4+ T cells, promote the development of CD8+ memory T cells and the differentiation of Teffs, and suppress the functions of Tregs, which can collectively lead to the expansion of autoreactive T cells (85). The function of B cells is also affected by type I IFNs, specifically through extended survival and activation leading to enhanced antibody production (86), ultimately resulting in the development of autoimmune diseases. Under normal physiological conditions, immune cells spontaneously and negatively regulate TLR signaling through various mechanisms, so as to avoid abnormal activation and to maintain immunological balance (87). Extracellular miRNAs have been shown to act as cell-to-cell regulators through a nonconventional mechanism, namely interactions with innate immune RNA receptors such as TLR7 and TLR8 (88,89). MiR-146a is a negative regulator of TLR signaling (90) and can be induced by various stimuli, such as lipopolysaccharides, imiquimod R837, type A CpG oligonucleotides, and type I IFNs (51). Hou et al.
(91) used bioinformatics tools to demonstrate that mature miR-146a reduces the expression of multiple components of the type I IFN signaling cascade, including interleukin-1 receptor-associated kinase 1, tumor necrosis factor receptor-associated factor 6, IFN regulatory factor 5, and STAT1, thereby directly attenuating downstream activation of type I IFNs. Therefore, it appears that miR-146a deficiency is a causal factor contributing to abnormal activation of the type I IFN pathway in SLE. Moreover, the coordinated activation of the type I IFN pathway was notably reduced after the introduction of miR-146a into the PBMCs of SLE patients. Collectively, these results suggest that exogenous regulation of miR-146a is a promising therapeutic strategy for SLE. In 2010, Wang et al. (92) reported a positive association between miR-155 and IFN-α. MiR-155, feedback-induced by viral infection, promotes type I IFN signaling by targeting suppressor of cytokine signaling 1, a canonical negative regulator of type I IFN signaling, and mediates the enhancing effect of miR-155 on type I IFN-mediated antiviral responses. However, further studies are needed to clarify the role of miR-155 in type I IFN signaling and to investigate any possible correlations with SLE. In addition, the results of a luciferase reporter assay using Rat-1 fibroblasts stably expressing miR-181b revealed that miR-181b directly and negatively regulates IFN-α (93). CCL5 (RANTES: regulated upon activation, normal T cell expressed and secreted) is a key chemokine for T cell recruitment to inflammatory tissues, and its active expression is known to enhance the levels and detrimental effects of inflammatory factors in arthritis, nephritis, and a myriad of other inflammatory disorders (94). Downregulated expression of miR-125a in SLE patients blunts the negative regulation of RANTES expression normally exerted through targeting of Krüppel-like factor 13 in activated T cells. Hence, miR-125a could potentially serve as a therapeutic target for the treatment of SLE via regulation of inflammatory chemokine production (95). CONCLUSIONS AND PERSPECTIVES SLE is a heterogeneous chronic inflammatory autoimmune disorder characterized by aberrant activation of lymphocytes, autoantibodies, and inflammatory cytokine production (1). The highly heterogeneous nature of SLE has hampered a comprehensive understanding of the etiology of the disease in terms of both the underlying pathogenic processes and its manifestations. Although researchers have devoted a great deal of effort to this question, it is clear that much remains to be understood about the intricate network of SLE. MiRNAs are a family of small noncoding RNA molecules that provide quantitative regulation of genes at the post-transcriptional level by targeting mRNAs for translational repression or degradation (3). This summary of current research has shown that various ubiquitous and functional miRNAs are involved in most of the immunological events leading to SLE: the upregulation of let-7c contributes to proinflammatory features of DCs in SLE (23); miR-7 is associated with the overactivation of B cells and subsequent autoantibody production (37); miR-21 up-regulation contributes to T cell hyperactivity (51); miR-34a is associated with Treg defects and immune tolerance breakdown (65); and transfection of miR-98 alleviates the increased production of inflammatory cytokines (96). Different miRNAs can target the same mRNA, and one miRNA can target multiple mRNAs.
Table 1 and Figure 4 list the miRNAs that have been preliminarily reported to be involved in immunoregulation in SLE. More extensive and in-depth studies on miRNAs are ongoing, and the miRNAs involved in the pathogenesis of SLE are not limited to those described here. Based on the results of current studies, the exact relationship between miRNAs and the pathogenesis of SLE cannot yet be definitively established. Although dysregulation of miRNAs has been reported to be involved in most of the important events in the progression of SLE, there is no conclusive evidence as to whether abnormal miRNA expression is a cause or merely a consequence of SLE. Many current studies are limited to analyzing how miRNAs affect SLE at the cellular level.
FIGURE 2 | (A) MiRNAs in dysfunctional antigen presentation by DCs. In the DC-specific absence of Blimp1, an increase in let-7 miRNA results in a broadly proinflammatory DC phenotype, mediated in part through suppression of SOCS1 expression. In SLE, CD40 expression is significantly upregulated, with a negative correlation to expression of the miR-155 primary target SHIP-1. MiR-150 inhibits the expression of TREM-1, which amplifies the function of TLR4. (B) MiRNAs in aberrant activation of CD4+ T cells. MiR-21 contributes to the aberrant phenotype of T cells through interaction with PDCD4, and indirectly inhibits DNMT1 expression by targeting RASGRP1. MiRNAs such as miR-126, miR-29b and miR-148a can directly inhibit DNMT1 expression by targeting the protein-coding region. These processes result in the overexpression of autoimmune-associated methylation-sensitive genes, which contribute to the autoreactivity and overstimulation of CD4+ T cells in SLE. MiR-142-3p specifically targets the SLAM family, while miR-142-5p targets the 3'-UTR of SAP; decreased miR-142-3p/5p expression thus contributes to the up-regulation of CD84 and IL-10/SAP, resulting in increased T cell function and IgG production in co-cultured B cells. Aberrant expression of miR-145 and miR-224 can promote T cell activation-induced cellular apoptosis and SLE-associated nephritis through overexpression of STAT1 and underexpression of API5. (C) MiRNAs in functional inhibition of Treg cells. The release of IL-6 or TNF-α can increase the expression of miR-34a, which attenuates Foxp3 expression by targeting its 3'-UTR. MiR-142-5p positively regulates intracellular levels of cAMP to maintain the suppressive function of Treg cells, whereas miR-142-3p can restrict cAMP levels in CD4+ T cells, compromising the inhibitory function of Treg cells. MiR-99a and miR-150 may regulate the function of Treg cells by targeting mTOR. The expression levels of anti-inflammatory miRNAs, such as miR-19b, miR-146a, miR-142-5p, miR-124b and miR-422a, decrease, while the expression levels of pro-inflammatory miRNAs, such as miR-146a, miR-224, miR-29b, miR-31 and miR-150, increase. Dysregulated miRNAs disturb the normal biological course of the immune response by swaying the expression of pivotal protein molecules (e.g., CD40, CD40L, IFN, BAFF) directly or indirectly. These processes lead to autoimmunity and tissue damage, with aberrantly activated T lymphocytes, over-activated B cells, autoantibody accumulation and abnormally increased inflammatory cytokines in SLE. IC, immune complex; TCR, T cell receptor.
These studies demonstrated that abnormally expressed miRNAs in PBMCs can lead to lupus-like cellular phenotypes characterized by overexpression of TLRs (25,57,104) or costimulatory molecules (51,73), enhanced cell signal transduction (23,59,65,69,70), and increased inflammatory cytokines (66,67,91), all of which are found in abundance in SLE. However, whether miRNAs can cause the development of this intricate disease in healthy individuals has not yet been demonstrated. Even so, the role of miRNAs in the molecular regulatory network of SLE is fascinating and suggests an exciting avenue for enriching our understanding of SLE. Investigations into miRNAs and the related molecular mechanisms involved in SLE will help to clarify the pathogenesis of this complex disease and potentially facilitate the identification of new treatment modalities. MiRNA-based therapeutic agents are being developed for the treatment of a variety of diseases (15,105). Gene knockout and transfection of several miRNAs have been demonstrated to alleviate the disease activity of SLE in mice (23,38). In addition, small-molecule drugs that target the biogenesis of miR-155 for the treatment of SLE have recently been discovered (106). However, these approaches have not yet reached the level of clinical application. Current studies have made significant progress in analyzing how miRNAs affect SLE at the cellular level, but much effort is still required to determine how far miRNAs drive the disease and whether they can be developed into new therapeutic targets. Thus, it is still too early to draw conclusions about the therapeutic effect of miRNAs on lupus. Despite their considerable therapeutic potential, the results of basic studies are still a long way from being translated into clinical care, so further research in this challenging field is urgently needed.
8,650.6
2021-08-02T00:00:00.000
[ "Medicine", "Biology" ]
Erosion and abrasion-inhibiting in situ effect of the Euclea natalensis plant of African regions This study evaluated the effect of Euclea natalensis gel on the reduction of erosive wear, with or without abrasion, in enamel and dentin. During two five-day experimental crossover phases, volunteers (n = 10) wore palatal devices containing human enamel and dentin blocks (E = 8 and D = 8). The gel was applied in a thin layer in the experimental group and was not applied in the control group. In the intraoral phase, volunteers used the palatal appliance for 12 h before the gel treatment and were instructed to start the erosive challenges 6 h after the gel application. Erosion was performed with Coca-Cola® (for 5 min) 4 times/day. The appliance was then put back into the mouth and was brushed after 30 minutes. After intraoral exposure, the appliances were removed and the specimens were analyzed using profilometry (mean ± SD, μm). The Euclea natalensis gel caused less wear in enamel in the experimental group (EROS = 12.86 ± 1.75 μm; EROS + ABRAS = 12.13 ± 2.12 μm) than in the control group (EROS = 14.12 ± 7.66 μm; EROS + ABRAS = 16.29 ± 10.72 μm); however, the groups did not differ significantly from each other. A statistically significant difference was found for erosion and erosion + abrasion in dentin (p = 0.001). Euclea natalensis may play a role in the prevention of dentin loss under mild erosive and abrasive conditions. A clinical trial is required to confirm these promising results in a clinical situation. Introduction Dental wear is a known multifactorial condition that may represent an association of erosion, abrasion, attrition and abfraction. Acidic foods are consumed worldwide;1 however, their effects on the mouth are widely assumed to be harmless. Acidic beverages and foods can affect natural teeth, and chronic exposure often leads to the development of dental wear.2 The most recognized cause of abrasion is brushing, in which not only the type of toothpaste and brush are responsible; poor technique and excessive brushing force (after acid challenge) may also act as aggravating factors. Erosive and abrasive processes are frequently observed, and are often associated, because abrasion of the dental hard tissue is considerably aggravated by exposure to acids (erosion).3 Efforts have been made to elucidate how erosive/abrasive lesions may be prevented.4 Several strategies have been used to prevent dental erosion and/or abrasion, such as topical application of fluoride or calcium-phosphate formulations.5 Another option is the addition of calcium, phosphate, iron, ferrous sulphate,4 titanium tetrafluoride,6 and/or sodium hexametaphosphate to rinsing solutions or toothpastes.7 Similarly, natural products such as propolis,8 neem9 and green tea10 have been used in experimental formulations for the treatment of different oral diseases.8,9 Herbs with medicinal properties are a useful and effective source of treatment for various disease processes.11 The Euclea natalensis plant has been used for oral hygiene, and also for treating some respiratory diseases in Africa, notably by the indigenous population of South Africa.12 The root bark is removed and the inside is chewed until it breaks up; afterwards, it is rubbed against the teeth and gums. The root of Euclea natalensis is used to clean the teeth in certain regions of Africa.12,13
Fresh root samples of Euclea natalensis have been tested against Streptococcus mutans, human saliva and periodontal pocket isolates, and both aerobic and anaerobic bacterial growth (Porphyromonas gingivalis, Prevotella intermedia and Treponema denticola) was suppressed in all instances.13,14 These plant extracts showed moderate cytotoxicity on the Vero cell line.14 Studies have demonstrated that the polyphenols contained in certain natural products, such as propolis8 and green tea,10,15 may have inhibitory properties against matrix metalloproteinases (MMP)-2 and -9, which could affect the remineralization of artificially demineralized dentin.10,15 The roots of Euclea species are rich in naphthoquinones, which may help explain their therapeutic activity, since these compounds have fungicidal, antibacterial, insecticidal, phytotoxic, cytotoxic and anti-carcinogenic properties.14 These naphthoquinones may also have properties that protect against enamel and dentin demineralization by inhibiting metalloproteinases.14,15 In a previous in vitro study, Euclea natalensis promoted the reduction of dentin demineralization caused by acids, showing that this plant is effective in inhibiting dentin dissolution. However, an in situ evaluation of Euclea natalensis had not yet been conducted to determine the reduction of enamel and dentin demineralization under high levels of acid challenge. Thus, the aim of this in situ study was to investigate the effect of Euclea natalensis gel on the erosive wear of enamel and dentin. Methodology Ethical aspects and subjects This research was approved by the Research Ethics Committee of the Bauru School of Dentistry, University of São Paulo (Protocol no. 010/2011). The study was conducted in full accordance with the Declaration of Helsinki. To be included in the study, subjects had to provide written informed consent, be aged 18 to 40 years, be in good health with no evidence of communicable diseases, have a stimulated whole-saliva flow rate of at least 1.0 mL/min, and show no evidence of active caries or periodontal disease. The exclusion criteria were: any medical condition that could be expected to interfere with the safety of the volunteer during the study period, being a smoker, having received topical application of agents with high fluoride concentration less than 2 weeks prior to the beginning of the study, and presenting systemic conditions such as xerostomia and gastro-esophageal disorders.4,16 The sample size was calculated based on data from a pilot study on surface microhardness change. A sample size of 4 volunteers was necessary to detect a relevant difference of 10% between the gel and control groups with an α-error of 5% and a power of 80%. Ten volunteers were included in each group, because the variability could be higher for cross-sectional hardness and because of possible losses inherent to in situ studies.17 The ten healthy volunteers lived in the same fluoridated (0.70 mg F/L) area, presented adequate stimulated salivary flow, and wore acrylic palatal appliances. Stimulated saliva was collected by chewing a piece of rubber band for 5 min and spitting out every minute.4 Volunteers with a salivary flow under 1 mL/min were not included in the sample. The volunteers' mean salivary flow was 1.34 ± 1.12 mL/min.
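To make the reported sample-size calculation concrete, a minimal sketch of a comparable power analysis is given below. This is an illustration only: the pilot mean and standard deviation used here are hypothetical placeholders, not values reported by the authors, and the original calculation may have used a different test or software.

```python
# Minimal sketch of a two-sample power analysis with alpha = 5% and
# power = 80%, as described in the text. The pilot values below are
# hypothetical placeholders, not data from the study.
from statsmodels.stats.power import TTestIndPower

pilot_mean = 350.0                      # hypothetical pilot microhardness mean
pilot_sd = 21.0                         # hypothetical pilot standard deviation
relevant_diff = 0.10 * pilot_mean       # the 10% difference deemed relevant

effect_size = relevant_diff / pilot_sd  # Cohen's d

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80,
    alternative="two-sided",
)
print(f"Cohen's d = {effect_size:.2f}; minimum n per group = {n_per_group:.1f}")
```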
Experimental design This randomized in situ study was a single-blind trial conducted in two phases based on Sales-Peres et al.,4 designed to compare the effect of Euclea natalensis gel on enamel and dentin erosive wear. Ten subjects who met the inclusion criteria cited above attended two phases of 5 days each, with a washout period of 7 days, and in situ/ex vivo erosion. The volunteers wore acrylic palatal appliances, each containing four enamel specimens and four dentin specimens from human teeth, randomly assigned to two rows and four columns. The specimens in the first row were submitted only to erosion, and those in the second row to erosion plus abrasion. The factors under evaluation were the treatment for dental erosion and dental erosion plus abrasion, on two levels: no gel treatment (control group - CG) and treatment with Euclea natalensis (experimental group - EG); and the dental substrate, on two levels: human enamel and dentin (subgroups E and D, respectively).4,17 The sequence of treatments was designed to make the in situ protocol easier for the volunteers to follow. In the first phase, the specimens were subjected to gel treatment (EG); in the second phase, the specimens were not subjected to gel treatment (CG). The phases were also randomized: half of the volunteers started the study in the first phase, and the other half in the second phase. After the end of each phase, the volunteers crossed over to the other phase. Sample preparation The enamel and dentin specimens (4 × 4 × 3 mm) were prepared from freshly extracted impacted human third molars stored in an aqueous 0.1% thymol solution for 30 days. All tooth surfaces were used for preparation of the specimens (crown and root for the enamel and dentin, respectively). The enamel surface of the slabs was ground flat with water-cooled carborundum discs (320, 600 and 1200 grade Al2O3 papers; Buehler, Lake Bluff, USA) and polished with diamond spray (1 µm; Buehler). The same procedure was used for the dentin surfaces, except for the 320 grade Al2O3 papers.4 Surface Knoop microhardness tests were performed (Knoop diamond, 50 g, 10 s for enamel and 25 g, 5 s for dentin; HMV-2000; Shimadzu Corp., Tokyo, Japan) to select 80 human enamel specimens with a hardness between 292.0 and 383.4 kPa·mm−2 and 80 human dentin specimens with a hardness between 48.6 and 68.7 kPa·mm−2, for a total of 160 specimens. Custom-made acrylic palatal appliances were made with 8 cavities (5 × 5 × 4 mm). Enamel and dentin specimens were fixed with wax in the recesses of each individual acrylic palatal appliance. The specimens (enamel and dentin) were randomly assigned to columns (at least 1 cm apart from each other), and the conditions (EROS or EROS + ABRAS) were randomly assigned to rows (at least 1 cm apart) for each volunteer. This split-mouth palatal appliance design has been used previously for testing erosion or erosion + abrasion.4 Intraoral phase and treatment The Euclea natalensis gel was made from the dried roots of E. natalensis (under controlled parameters) and custom prepared by this research group in the Laboratory of Natural Products at the Universidade Federal de São Carlos - UFSCAR. The gel formulation included 10% Euclea natalensis extract, Carbopol® 980, methylparaben, EDTA, and sodium hydroxide solution. The participants wore the appliances for 12 h before starting the erosive wear for each intraoral phase, to allow a salivary pellicle to form.18
The gels were then applied in a thin layer using a microbrush, left in place for 5 min, and then carefully removed with cotton swabs. In the intraoral phase, volunteers were instructed to start the erosive wear procedure 6 h after the gel treatment. During the following 5 days, erosive and abrasive challenges were performed ex vivo 4 times/day (at 8 am, 12 pm, 4 pm, and 8 pm) after the main meals.4 In each challenge, the appliance was immersed in a cup containing 150 mL of a recently opened bottle of cola soft drink (pH 2.6; Coca-Cola Co. Spal, Porto Real, Brazil) for 5 min at 25°C.4 After the erosive wear, the volunteers were instructed to take a small amount of the beverage into their mouth and re-insert the appliance. Thirty minutes after the erosive wear, the second row of specimens was brushed with a soft toothbrush and fluoride dentifrice (Colgate, 1100 ppm F, Colgate-Palmolive Co.®, São Paulo, Brazil) for 30 seconds ex vivo.4 The volunteers were instructed to wear the appliances continuously for 24 h, except during the main meals (4 times/day), when the appliance was stored in wet gauze. Seven days prior to the beginning of, and throughout, the experimental phase, participants brushed their teeth with a fluoridated dentifrice. The subjects received written instructions and a schedule, and were extensively trained in all the procedures. Wear analysis The laboratory personnel were blinded to the experimental specimen groups. At the end of the in situ phase, the specimens were remounted on acrylic blocks, and the nail varnish covering the reference surfaces was carefully removed. Surface profiles of the specimens were obtained with a contact profilometer (Hommel Tester T 1000, Hommelwerke, VS-Schwenningen, Germany). Surface loss was determined as follows: the protective tape was removed and 5 profiles were recorded at exactly the same sites as those used for the baseline measurement. The profile scans were performed in the center of each specimen at 250 µm intervals.4,17 The specimens had been allocated to the treatments by stratified randomization according to the mean surface microhardness, and all the groups presented similar mean microhardness values (about 363 ± 21 kPa·mm−2 and 68 ± 7 kPa·mm−2 for enamel and dentin, respectively). After completion of each experimental phase, the nail varnish was carefully removed from the specimen surfaces using an acetone-soaked cotton pellet prior to analysis of surface wear with a profilometer (Shimadzu, Tokyo, Japan). Five scans were performed on the entire enamel surface. The dentin specimens were kept wet until the analysis to avoid shrinkage of the collagen fibrils. The values were averaged (μm), and the increment value (final value − initial value) was calculated for the enamel and dentin specimens, which were then submitted to statistical analysis (Figure). Statistical analysis The assumptions of equality of variances and normal distribution of errors were checked by the Bartlett and Kolmogorov-Smirnov tests, respectively. Since the assumptions were satisfied, the data were analyzed by two-way analysis of variance (ANOVA). Tukey's test, conducted with STATISTICA version 10.0 (Stat-Soft, Tulsa, USA), was used to perform individual comparisons between the groups. The significance level was set at 5% (a code sketch of this workflow is given after the first Discussion paragraph below). Results Table 1 shows the mean wear (±SD) of the groups in the enamel, and Table 2 in the dentin. There was no difference between the groups (experimental vs.
control) in relation to the enamel (p > 0.05). However, there was a difference between them for the erosive and abrasive challenges in the dentin (p < 0.05). The Euclea natalensis gel caused less wear in the enamel for the erosive (12.86 ± 1.75 µm) and erosive + abrasion (12.13 ± 2.12 µm) challenges in the experimental group, compared with the control group (14.12 ± 7.66 µm and 16.29 ± 10.72 µm, respectively; p > 0.05). The best preventive effect was observed for the dentin under the experimental conditions of the erosive (5.81 ± 1.00 µm) and erosive + abrasion (6.16 ± 1.00 µm) challenges, which yielded a significantly lower mean wear than that observed for the control group (p < 0.05). Discussion In situ models involving the use of oral devices are commonly adopted for assessing erosion and abrasion.19 In the present in situ study, there was no significant difference between the experimental group using Euclea natalensis gel and the control group in relation to the enamel (p > 0.05), but there was a significant difference in relation to dentin erosive wear (p < 0.05). This may be attributed to the difference in enamel and dentin composition, which may influence the erosive process. In enamel, the initial stage of erosion is characterized by a softening of the surface, due in part to demineralization of the surface. Dentin erosion, on the other hand, is first apparent at the interface between the inter- and peritubular dentin, and greater exposure time causes hollowing and funneling of the tubules; ultimately, the peritubular dentin is completely dissolved.20 Saliva performs specific functions to protect the tooth structure, through its buffering capacity and its supersaturation with calcium, phosphate and fluoride.21 The protective functions of saliva against the challenges of erosion include: dilution and clearance of erosive substances in the oral cavity; neutralization of acids through buffering by bicarbonate ions; provision of calcium, phosphate and fluoride; possible promotion of needed remineralization; and maintenance of a supersaturated state at the tooth surface, since calcium and phosphate are present in saliva. Given the importance of salivary flow, one of the criteria for the volunteers to be included in this study was normal salivary flow. Low salivary flow may contribute to the symptoms of erosion, since saliva and its components protect the teeth by neutralizing acidity.16 The salivary pellicle formed may serve as a diffusion barrier or semi-permeable membrane, preventing direct contact between the acids and the tooth surface, thereby preventing demineralization.22,23 Accordingly, the pellicle seemed to have some effect on the enamel, reducing the acid challenge in both groups. Future studies could examine how the period between gel application and acid challenge affects the benefits gained.
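Referring back to the statistical analysis described in the Methods (two-way ANOVA followed by Tukey's test on the wear increments), a minimal sketch of an equivalent workflow is shown below. The column names and input file are hypothetical illustrations; the original analysis was performed in STATISTICA, not with this code.

```python
# Minimal sketch: two-way ANOVA on wear increments (treatment x challenge)
# followed by Tukey's HSD test. Column names and the input file are
# hypothetical placeholders, not part of the original study.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("wear_increments.csv")  # columns: wear, treatment, challenge

# Two-way ANOVA with interaction between gel treatment and challenge type
model = ols("wear ~ C(treatment) * C(challenge)", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey's test on the combined treatment/challenge groups
df["group"] = df["treatment"] + "/" + df["challenge"]
print(pairwise_tukeyhsd(df["wear"], df["group"], alpha=0.05))
```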
A greater loss of enamel substance was observed in the control group when erosion was associated with abrasion, although the difference between the groups was not statistically significant. This reinforces the hypothesis that the enamel brushing performed after 30 min should have been postponed for 1 hour, since there was a greater loss when erosion was associated with abrasion. This is in agreement with the literature,24,25 which has proposed that oral hygiene (brushing) should not be performed immediately after the intake of acidic food, so as to allow salivary buffering of the acidic pH; the proximity between the time of acidic food intake and regular tooth brushing is a risk factor for dental erosion. When oral hygiene (brushing) is performed immediately after meals, saliva cannot rebalance the pH, which remains acidic; this, together with the mechanical force exerted during brushing, leads to a synergistic effect.24,25 Studies have shown that dentin demineralization occurs in the peritubular dentin;26,27 dentin contains 18-20% organic material, of which 90% is type I collagen. The presence of a collagen-rich dentin surface decreases the diffusion of acids in the tissue and further exhibits buffering properties, which minimize dental erosion. The organic matrix of dentin can be degraded both mechanically and chemically, which may contribute to the further progression of dentin wear. Chemically, the organic matrix of dentin can be degraded by MMPs (mainly MMP-8, -9 and -2), which cause the breakdown of virtually all extracellular matrix molecules, including native and denatured collagen.15 An in situ study evaluated the effect of a solution containing 0.61% green tea extract, versus none, in reducing dentin wear by erosion followed by abrasion. The results demonstrated that the solution containing green tea reduced wear, compared with the control, under both conditions. The authors concluded that the polyphenols present in green tea may have an inhibitory effect on the MMPs present in the dentin matrix.10 A study using histochemical analysis revealed the presence of naphthoquinones and polyphenols (such as flavonoids and tannins) in the root of Euclea natalensis.12,28,29 According to the literature presented,28,29 the therapeutic effect of Euclea natalensis extract could be attributed to matrix metalloproteinase inhibition in the dentin. This may be supported by the hardness analysis, which showed a higher loss of hardness in the Euclea natalensis group, indicating, in turn, that the collagen fiber structure may have retained some flexibility. Condensed tannins (CT), or proanthocyanidin units, consist of flavanol units, namely flavan-3-ols (catechins) and flavan-3,4-diols (leucoanthocyanins); they may contain 20 to 50 flavonoid units, have a complex structure, and are resistant to hydrolysis, although they may be soluble in aqueous organic solvents, depending on their structure. In addition, they can precipitate proteins from the cells and thus form a protective layer,30 which could be one of the reasons for the beneficial effects of Euclea natalensis. Furthermore, several studies have shown that the polyphenolic compounds present in Euclea natalensis may inhibit the adhesion of S. mutans to hydroxyapatite, while others discuss their effect in promoting enamel remineralization.28,29 Euclea natalensis extract alone may be an alternative product to protect oral health and prevent dental caries,13 tooth wear and dentinal sensitivity.31
For this reason, the effect of the chemical properties of Euclea natalensis as modifying factors for dental erosion should be explored in future investigations. Conclusion Euclea natalensis may play a role in the prevention of dentin loss under erosive and abrasive conditions. A clinical trial is required to confirm these promising results in a clinical situation. Table 1. Wear (µm) of the enamel (mean ± SD), with or without application of the gel, under two experimental conditions. SD: standard deviation. No significant difference was found (p > 0.05). Table 2. Wear (µm) of the dentin (mean ± SD), with or without application of the gel, under two experimental conditions.
4,492.6
2016-06-14T00:00:00.000
[ "Materials Science", "Medicine" ]
Comment on nhess-2020-411 In their submission, Schimmel et al. test relationships (some of them empirical) that compare metrics of seismic measurements near a torrent to debris flow velocities and volumes. Similar calculations have been made in the past, but this study uses data from different torrents, which provides valuable insights for potential alarm and monitoring systems. The scope of the study falls into the seismic monitoring of Alpine mass movements, which is currently an active research field, so I expect this study to be met with interest within the journal's readership. The study is more on the technical side, which is acceptable given the journal scope, although I suggest some more discussion in terms of physical mechanisms, as I outline below. Moreover, the details of the documented calculations are unclear and should be rewritten, especially since they constitute the core of the paper. The manuscript is concise, structured and easy to follow. However, the English contains numerous grammar mistakes and has to be revised before the paper can be published. Overall, I recommend major revisions. MAIN COMMENTS The velocity calculations are not well described. Which three different sliding windows do the authors refer to? What is the relation between minimum and maximum amplitudes? A ratio? How can the number of samples be equal to some distance? Which distance? Distance in which unit? What is a "significant signal shape"? In its current state, it is not possible for a reader to use the explanations to reproduce the calculations. I may have missed this, but how are the ground-truth debris flow volumes calculated, to which the seismically derived values are compared? I was surprised that the authors do not discuss Schimmel et al. (2018), who use seismic and infrasound data to calculate discharge and estimate debris flow volumes. Is the current technique an improvement over this previously suggested one? Except for a small part of the discussion, the authors give no explanation of the physics behind debris flow seismicity. The cited papers by Lai et al. (2018) and Farin et al. (2019) make specific predictions relating the seismic signature to debris flow velocities, grain size distributions and other parameters. Even if the authors do not want to dive into details, they should use these theoretical assertions to offer explanations for their observed volume scaling. In several parts of the manuscript, the authors refer to the turbulent flow front. They need to provide evidence that their flow fronts were indeed turbulent and that this explains their observed signals. Some video or still footage could serve this purpose. Alternatively, I would expect that boulders in the flow front cause a distinct seismic signature compared to the flow tail.
In a recent paper (Zhen et al., 2020, in GRL), we showed how the flow front's seismic signature is dominated by ground impacts of the largest boulders. Finally, Figures 8 and 9 should include error bars, or at least a short discussion of uncertainties should be offered. OTHER COMMENTS Line 18: "feasible" should be deleted, as it is implied by "effective". Lines 24ff: What are the physical concepts behind these velocity estimates? In several instances, the authors use the word "magnitude". If this is synonymous with "volume", then use the latter only. Lines 35ff: Here it seems that the authors argue that mass could be estimated with the Coviello et al. (2019) approach, but volume is poorly constrained. The difference between the two is the density factor. Why is this so poorly constrained? Lines 40ff and elsewhere: Avoid one-sentence paragraphs. Section 2, Methods: It would be interesting to see rough numbers of debris flows per year for the different sites. Line 113: This peak discharge seems rather high. Line 126: Velocity measurements around 2500 s in Figure 6 do not seem "consistent", as asserted in the text. Lines 142-143: "can be an useful tool to analyze the flow behavior" This statement is trivial. Line 148: "permitting to avoid wrong correlation results" is unclear. Line 152: It is not clear how longer distances offer better resolution (resolution should be lower ...?). Line 155: "so that the cross-correlation offers useful results" You should be more specific here. Line 159: "determine problems for the cross-correlation analysis" is unclear. Line 164: I do not understand how the authors arrive at granularity here. This should be explained. Line 169 and elsewhere: Is "process" synonymous with "debris flow"? If so, only use the latter. Lines 166-168: This sentence needs a reference. Line 174: "velocity measured by the radar is often lower ..." needs a reference. Lines 184-185: I am not sure that the volume estimate would always come too late. It should be acceptable if the measurements were made high up in the catchment. Line 193 and elsewhere: "sediment concentration" Do you mean "grain size distribution"? Line 193: "calculation of the magnitude" of what? Lines 200-201: "among the different methods deployed and the different catchments" Be more specific. Lines 203-204: "but still further research on different ..." This is an unnecessarily generic statement. Why exactly is more research needed? Lines 206-209: I suggest discussing Zhen et al. (2020) in the context of this sentence earlier in the manuscript. FIGURES Captions of Figures 5, 6 and 7: Rewrite so that the site name appears in the first sentence of each caption and so that it is clear which "two geophones" are meant.
1,398.6
2021-02-12T00:00:00.000
[ "Geology" ]
Antioxidant Activities and Mechanisms of Tomentosin in Human Keratinocytes Tomentosin, a natural sesquiterpene lactone sourced from Inula viscosa L., exerts therapeutic effects in various cell types. Here, we investigated the antioxidant activities and the underlying action mechanisms of tomentosin in HaCaT cells (a human keratinocyte cell line). Specifically, we examined the involvement of tomentosin in the aryl hydrocarbon receptor (AhR) and nuclear factor erythroid 2-related factor 2 (Nrf2) signaling pathways. Treatment with tomentosin for up to 60 min triggered the production of reactive oxygen species (ROS), whereas treatment for 4 h or longer decreased ROS production. Tomentosin treatment also induced the nuclear translocation of Nrf2 and upregulated the expression of Nrf2 and its target genes. These data indicate that tomentosin induces ROS production at an early stage, which activates the Nrf2 pathway by disrupting the Nrf2-Keap1 complex. At a later stage, however, ROS levels were reduced by the tomentosin-induced upregulation of antioxidant genes. In addition, tomentosin induced the phosphorylation of mitogen-activated protein kinases (MAPKs), including p38 MAPK and c-Jun N-terminal kinase (JNK). SB203580 (a p38 MAPK inhibitor) and SP600125 (a JNK inhibitor) attenuated the tomentosin-induced phosphorylation of Nrf2, suggesting that the JNK and p38 MAPK signaling pathways can contribute to tomentosin-induced Nrf2 activation through phosphorylation of Nrf2. Furthermore, N-acetyl-L-cysteine (NAC) treatment blocked both the tomentosin-induced production of ROS and the nuclear translocation of Nrf2. These data suggest that tomentosin-induced Nrf2 signaling is mediated both by tomentosin-induced ROS production and by the activation of p38 MAPK and JNK. Moreover, tomentosin inhibited the AhR signaling pathway, as evidenced by the suppression of xenobiotic-response element (XRE) reporter activity and of the translocation of AhR into the nucleus induced by urban pollutants, especially benzo[a]pyrene. These findings suggest that tomentosin can ameliorate skin damage induced by environmental pollutants. Introduction Reactive oxygen species (ROS) are produced intracellularly during cellular metabolism, mainly in the mitochondria. ROS formation can occur when cells are exposed to cellular or extracellular stress caused by agents such as xenobiotics, cytokines, ultraviolet (UV) light, and environmental pollutants [1][2][3]. Oxidative stress, caused by an accumulation of ROS that outweighs the antioxidant capacity of the cells, damages the cells and intracellular macromolecules such as proteins, lipids, and DNA [4]. Moreover, oxidative stress is involved in various diseases such as vitiligo, aging, diabetes, and cancer [5]. On the other hand, ROS function as second messengers in various signaling cascades; ROS levels below the cellular tolerance threshold activate ROS signaling pathways that regulate ROS homeostasis and prevent cell damage [6]. Nuclear factor erythroid 2-related factor 2 (Nrf2) plays a major role in sensing and eliminating oxidative stress in cells. Nrf2 is a transcription factor that interacts with the antioxidant response element (ARE) and regulates the expression of phase II detoxifying and antioxidant enzymes, including NAD(P)H-quinone oxidoreductase 1 (NQO1) and heme oxygenase-1 (HO-1) [7]. Under basal conditions, Nrf2 forms a complex with two repressor proteins, namely Cullin 3 (Cul3) and Kelch-like ECH-associated protein 1 (Keap1), in the cytoplasm.
However, under oxidative stress conditions, the modification of cysteine residues in Keap1 induces a conformational change in the Keap1 protein, which results in the dissociation of Nrf2 from the Nrf2-Keap1-Cul3 complex. Free Nrf2 translocates to the nucleus and heterodimerizes with the small Maf protein. This heterodimer induces the expression of Nrf2 target genes by binding to the ARE region in the promoters of these genes [8]. The nuclear translocation and degradation of Nrf2 are known to be modulated by post-translational modification of Nrf2, especially phosphorylation [9]. The phosphorylation of Nrf2 at Ser-40 by protein kinase C (PKC) leads to the dissociation of the Nrf2-Keap1 complex and prompts the nuclear translocation of Nrf2 [10]. Additionally, mitogen-activated protein kinases (MAPKs) and the phosphatidylinositol 3-kinase (PI3K/AKT) pathway are also known to contribute to the phosphorylation and activation of Nrf2 [9]. Benzo[a]pyrene (B[a]P), a polycyclic aromatic hydrocarbon, is a commonly occurring environmental pollutant and is classified as a human carcinogen [11]. It is also known to be a component of particulate matter. B[a]P induces the production of proinflammatory cytokines and ROS through the aryl hydrocarbon receptor (AhR) signaling pathway [12]. B[a]P binds to AhR in the cytoplasm and induces translocation of AhR into the nucleus. AhR activated by B[a]P forms a complex with the AhR nuclear translocator (ARNT) in the nucleus. The complex subsequently interacts with the xenobiotic response element (XRE) and upregulates the expression of its target gene, cytochrome P450 1A1 (CYP1A1) [11]. Tomentosin, a natural sesquiterpene lactone, is one of the main components of several aromatic medicinal species such as Inula viscosa (L.) Aiton [13]. It has been used as a therapeutic agent owing to its pharmacological activities, which include anti-cancer, anti-bacterial, and anti-inflammatory effects [14][15][16][17]. Although the effects of tomentosin have been actively examined in cancer cells, there are no studies on its protective effects in skin tissue. Therefore, in this study, we investigated the antioxidant activities of tomentosin and its mechanisms of action in the HaCaT human keratinocyte cell line. Assay for Cell Viability The cytotoxicity of tomentosin in HaCaT cells was evaluated using Cell Counting Kit-8 (CCK-8, CK04-11, Dojindo, Japan). HaCaT cells were seeded in 12-well plates and incubated in DMEM in the presence of different concentrations of tomentosin for 24 h. After treatment, CCK-8 (4 µL/well) was added to the wells, and the plates were incubated at 37 °C for 2 h. Then, the cell culture medium was transferred to a 96-well plate and the absorbance was measured at 450 nm with a microplate reader (Synergy HTX Multi-Mode Reader, BioTek, VT, USA). Results were verified by repeating the experiment four times. ARE and XRE Luciferase Reporter Assay and β-Galactosidase Assay HaCaT cells were seeded in 6-well plates and incubated in DMEM at 37 °C overnight. The cells were transiently co-transfected with 1 µg xenobiotic response element (XRE) (Stratagene, La Jolla, CA, USA) or 1 µg antioxidant response element (ARE)-driven luciferase reporter plasmid (Addgene, Watertown, MA, USA) and 1 µg β-galactosidase plasmid. A total of 5 µg of polyethylenimine (PEI) (23966-2, Polysciences, Inc., Warrington, PA, USA) was used to perform the transfections. After 4 h, the medium was replaced with fresh medium to stabilize the cells.
The transfected cells were incubated with various concentrations of tomentosin in the presence or absence of 3 µM B[a]P or 100 µg/mL UPM (National Institute of Standards and Technology, NIST-1648a, Sigma-Aldrich, St. Louis, MO, USA) for 24 h. The cells were collected with phosphate-buffered saline (PBS) and centrifuged at 16,200× g for 5 min. The centrifuged cells were lysed with Reporter Lysis Buffer (E3971, Promega, Madison, WI, USA). The lysates were centrifuged at 12,000× g for 3 min at 4 °C, and the supernatant was transferred to 96-well plates. The luciferase and β-galactosidase activities of the supernatants were measured following the manufacturer's instructions (Promega Corporation). The XRE and ARE promoter activities were expressed as ratios of firefly luciferase activity to β-galactosidase activity. Results were verified by repeating the experiment four times. Dichlorofluorescin Diacetate (DCFDA) Cellular ROS Detection Assay ROS production was quantitatively determined both by fluorescence microscopy and by using the DCFDA cellular ROS detection assay kit (ab113851, Abcam, Cambridge, UK). Cells were plated in 60 mm dishes and 96-well opaque-wall plates. The cells were washed three times with PBS and stained with 20 µM DCFDA in PBS for 20 min at 37 °C in the dark. After staining, the cells were washed with PBS again and incubated with 55 µM tert-butyl hydroperoxide (TBHP, 458139, Sigma-Aldrich, St. Louis, MO, USA) solution (positive control). Tomentosin treatment was performed for 24 h before DCFDA incubation or for up to 60 min after DCFDA incubation. DCFDA fluorescence signals were detected at an excitation wavelength of 485 nm and an emission wavelength of 535 nm. The change in fluorescence was expressed as a percentage of the control after background subtraction. Results were verified by repeating the experiment four times. 2,2-Diphenyl-1-picrylhydrazyl (DPPH) Radical Scavenging Assay The free radical scavenging activity of tomentosin was quantitated using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay. Different concentrations of tomentosin were added separately to 0.15 mM DPPH (D9132, Sigma-Aldrich, St. Louis, MO, USA) and incubated for 10 min at room temperature. The absorbance of the mixture was then measured at 517 nm. Ascorbic acid (100 µM) served as the positive control. Results were verified by repeating the experiment four times. The percent radical scavenging activity was calculated using the following equation: % Radical scavenging activity = [(Absorbance of control − Absorbance of sample)/Absorbance of control] × 100 Real-Time RT-PCR Analysis of mRNA Levels Total RNA was extracted from cells with QIAzol lysis reagent (79306, Qiagen, Hilden, Germany) following the manufacturer's protocols and maintained at −70 °C until use. cDNA was synthesized from total RNA (2 µg) using TOPscript™ RT Drymix (RT200, Enzynomics, Daejeon, Korea) following the manufacturer's protocols. Real-time RT-PCR was conducted using the QuantiSpeed SYBR NO-ROX kit (QS105-10, Phile Korea, Seoul, Korea). Endogenous glyceraldehyde 3-phosphate dehydrogenase (GAPDH) (accession number: NM_001256799) was used for data normalization. The mRNA levels of the target genes were normalized to the levels observed in controls. Results were confirmed by performing the experiment four times; triplicate samples were included in each experiment. Measurement of Protein Levels Using Western Blot Analysis HaCaT cells were seeded in 60 mm dishes and treated as per the experimental plan.
Protein samples were then prepared from the treated cells. For protein extraction, the cells were collected and centrifuged at 15,000 rpm for 50 min at 4 °C. Cell pellets were lysed using RIPA buffer (150 mM NaCl, 1% sodium deoxycholate, 25 mM Tris-HCl (pH 7.6), 1% NP-40, 0.1% SDS (9806s, CST, Danvers, MA, USA)) containing phosphatase and protease inhibitor cocktail (5872s, CST, Danvers, MA, USA). Protein samples were quantified with a BCA assay kit (Thermo Fisher Scientific, Waltham, MA, USA). After that, the protein samples were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to polyvinylidene difluoride membranes (162-0177, Bio-Rad, CA, USA). The polyvinylidene difluoride membranes were blocked with BSA and exposed to antibodies. Finally, the proteins were detected using ECL Western Blotting Reagents (170-5061, Bio-Rad, CA, USA). Results were verified by repeating the experiment four times. Preparation of the Nuclear and Cytoplasmic Cell Fractions Cytoplasmic and nuclear cell extracts were fractionated with NE-PER Nuclear and Cytoplasmic Extraction reagents (78833, Thermo Fisher Scientific, Waltham, MA, USA) following the manufacturer's instructions, and then subjected to Western blot analysis of the target proteins. Detection of Target Proteins Using Immunocytochemistry The cells were fixed with 4% paraformaldehyde in PBS for 15 min and permeabilized using 0.01% Tween-20 and 0.1% Triton X-100 for 20 min at room temperature. After blocking with 3% BSA in PBS, the cells were treated with anti-Nrf2 (1:1000; ab76026, Abcam) or anti-AhR (1:500, sc-133088, CST, Danvers, MA, USA) antibodies. The fixed cells were then washed three times and treated with Flamma-594- or Flamma-488-conjugated secondary antibodies (Abcam, Cambridge, UK). Subsequently, after counterstaining with Hoechst 33342, cells were mounted on glass slides and observed under an LSM 700 laser scanning confocal microscope (Zeiss, Jena, Germany) with a C-Apochromat 20× objective. Images of the cells were captured uniformly at a set laser power and the mean intensity of the fluorescence signals was determined. The data were analyzed with ZEN 2012 Blue (Zeiss, Jena, Germany) under similar processing parameters. Results were verified by repeating the experiment four times. Enzyme-Linked Immunosorbent Assay (ELISA) for Target Proteins An interleukin (IL)-8 ELISA kit (ab46032, Abcam, Cambridge, UK) was used to quantitate IL-8 levels following the manufacturer's protocol. Absorbance was measured using a Labsystems Multiskan MS analyser (Thermo Bio-Analysis Japan, Tokyo, Japan). Results were confirmed in three independent experiments. Statistical Analysis for Data Significance All results are expressed as mean ± SD and were confirmed in at least three independent experiments. Statistical analysis of the data was conducted using Student's t-test for independent samples. A p-value < 0.05 was considered statistically significant. Tomentosin Exerts Antioxidant Activity in HaCaT Cells The tomentosin treatment concentrations were selected based on the cytotoxicity of tomentosin as assessed by the CCK-8 assay. Tomentosin showed no cytotoxicity at 10 µM in HaCaT cells, based on the observation that there was no significant alteration in cell viability (Figure 1B). In addition, tomentosin has been reported to increase intracellular ROS levels in various cancer cell types [8,18].
Therefore, we conducted a DCFDA cellular ROS detection assay to investigate the involvement of tomentosin in ROS production in HaCaT cells (a normal human keratinocyte cell line). Cells were incubated with different concentrations of tomentosin (1, 5, and 10 µM) for 24 h. As shown in Figure 1C,D, basal ROS production was reduced in tomentosin-treated HaCaT cells in a concentration-dependent manner. Tert-butyl hydroperoxide (TBHP) was introduced as a positive control. In addition, we examined the time point at which tomentosin reduces ROS generation in cells. Cells were incubated with 10 µM tomentosin for different time periods. ROS levels were measured by imaging assays (Figure 1E) and by fluorescence intensity assays (Figure 1F). As shown in Figure 1E,F, tomentosin treatment for >6 h reduced intracellular ROS levels in a time-dependent manner. Furthermore, TBHP-induced ROS production was attenuated by tomentosin treatment (Figure 1G,H). These data indicate that tomentosin has antioxidant activity in a time- and concentration-dependent manner. Tomentosin Activates the Nrf2 Signaling Pathway In the previous experiments, tomentosin lowered ROS levels in cells and thus exhibited antioxidant activity. Therefore, we performed a DPPH assay to examine whether tomentosin possesses radical-scavenging activity. However, contrary to our expectation, tomentosin did not reduce DPPH free radicals in vitro (Figure 2A). Next, we examined the effect of tomentosin on Nrf2 signaling, which is one of the main antioxidant signaling pathways in HaCaT cells. In this experiment, tomentosin significantly increased ARE reporter activity in a concentration-dependent manner (Figure 2B). We further examined the expression of Nrf2 and its target genes HO-1 and NQO1. Tomentosin increased the protein levels of Nrf2 and its target genes, including HO-1 and NQO1, in a concentration-dependent manner (Figure 2C). The mRNA levels of HO-1 and NQO1 were also increased by tomentosin treatment (Figure 2D). On the other hand, tomentosin reduced the protein levels of Keap1, which mediates the degradation of Nrf2 in the cytoplasm (Figure 2C). In addition, Western blot and immunocytochemistry analyses showed that tomentosin increased the nuclear translocation of the Nrf2 protein (Figure 2E,F). These results indicate that tomentosin exerts antioxidant activity through activation of the Nrf2 signaling pathway. (Figure 1 legend, continued: after 20 min, tert-butyl hydroperoxide (TBHP) was added to a final concentration of 55 µM and the cells were incubated for 1 h before being subjected to fluorescence microscopy (G; original magnification = 10×) or fluorescence intensity analysis (H). ROS production was assessed using DCFDA. The data are shown as the mean ± SD of triplicates. * p < 0.05 vs. untreated group, ** p < 0.01 vs. untreated group, oo p < 0.05 vs. TBHP-treated group. TBHP, tert-butyl hydroperoxide, was introduced as a positive control.) Tomentosin Induces Nrf2 Activation by Phosphorylating p38 MAPK and JNK in HaCaT Cells To examine the molecular mechanisms underlying Nrf2 activation by tomentosin, we investigated the relationship between the phosphorylation of MAPKs and Nrf2 using Western blot analysis. Tomentosin treatment increased the phosphorylation levels of Nrf2 (Figure 3A). Additionally, tomentosin induced the phosphorylation of p38 MAPK and JNK in a time-dependent manner but did not show this effect on p42/44 MAPK (Figure 3B).
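The ROS readouts reported throughout these results reduce to the percent-of-control computation described in the Methods (background subtraction followed by normalization to the untreated control). A minimal Python sketch of that reduction follows; the well readings and the helper name percent_of_control are hypothetical illustration values introduced here, not data or code from the study.

import statistics

def percent_of_control(sample_rfu, control_rfu, background_rfu):
    # Background-subtract each condition, then express the sample as a
    # percentage of the untreated control, as described in the Methods.
    corrected_sample = statistics.mean(sample_rfu) - background_rfu
    corrected_control = statistics.mean(control_rfu) - background_rfu
    return 100.0 * corrected_sample / corrected_control

# Hypothetical triplicate plate-reader readings (arbitrary fluorescence units).
background = 120.0
control = [1850.0, 1790.0, 1880.0]            # untreated cells
tomentosin_10uM_24h = [980.0, 1010.0, 940.0]  # long treatment: ROS reduced

print(f"{percent_of_control(tomentosin_10uM_24h, control, background):.1f}% of control")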
To investigate the involvement of p38 MAPK and JNK in the phosphorylation of Nrf2, we performed Western blot analysis using SB203580 (a p38 MAPK inhibitor) and SP600125 (a JNK inhibitor) and found that treatment with these inhibitors significantly reduced the phosphorylation levels of p38 MAPK and JNK, respectively (Figure 3C). Moreover, as shown in Figure 3D,E, SB203580 and SP600125 attenuated the tomentosin-induced phosphorylation of Nrf2. These data indicate that tomentosin induces Nrf2 phosphorylation through activation of p38 MAPK and JNK. Tomentosin-Induced ROS Production Mediates Nrf2 Activation Previous reports have shown that Nrf2 can be activated by electrophilic stress and mild oxidative stress [19,20]. This suggests that some types of antioxidants can exert their antioxidant activity through the ROS-dependent activation of Nrf2 signaling. Therefore, to elucidate the involvement of ROS in the tomentosin-induced activation of Nrf2 signaling, we investigated the effect of tomentosin treatment on ROS production. Although tomentosin reduced ROS levels when the treatment duration was 6-12 h (Figure 1E,F), it triggered the production of ROS when the treatment duration was shorter than 60 min (Figure 4A,B). This effect of tomentosin on ROS production was significantly attenuated by N-acetyl-L-cysteine (NAC), a strong antioxidant (Figure 4C). In addition, to examine whether tomentosin-induced ROS activates Nrf2, we analyzed the nuclear translocation of Nrf2 in tomentosin-treated cells in the presence of NAC. While Nrf2 protein was increased in the nuclear fraction of tomentosin-treated cells compared to untreated cells, NAC attenuated the Nrf2 nuclear translocation induced by tomentosin (Figure 4D). These findings indicate that tomentosin activates Nrf2 signaling by inducing low levels of ROS production. (Figure 4 legend, continued: the results are shown as the mean ± SD of triplicates. * p < 0.05 vs. untreated group; ## p < 0.01 vs. tomentosin-treated group. (D) Cells were treated with 10 µM tomentosin in the presence of 10 mM NAC for 24 h. The treated cells were collected, and the nuclear and cytoplasmic fractions were analyzed by Western blotting. α-Tubulin and lamin B1 served as controls for cytoplasmic and nuclear protein extracts, respectively. NE, nuclear extracts; CE, cytosolic extracts.) AhR signaling is activated by environmental pollutants such as B[a]P and urban particulate matter (UPM) [21,22]. We investigated the effects of tomentosin on B[a]P-induced AhR signaling by analyzing the following: the activity of a xenobiotic-response element (XRE)-luciferase reporter, AhR nuclear translocation, and cytochrome P450 1A1 (CYP1A1) expression. Tomentosin treatment reduced B[a]P-induced XRE-luciferase activity in a concentration-dependent manner (Figure 5A). In this experiment, we also used UPM, a known ligand of AhR. Similar to B[a]P, tomentosin reduced UPM-induced XRE-luciferase activity (Figure 5B). In addition, tomentosin inhibited B[a]P-activated AhR nuclear translocation, as evidenced by Western blotting (Figure 5C) and immunocytochemistry (Figure 5D). Furthermore, while the protein and mRNA levels of CYP1A1 were increased by B[a]P treatment, tomentosin reduced the B[a]P-induced upregulation of CYP1A1 expression (Figure 5E,F). These results indicate that tomentosin also regulates B[a]P-induced AhR signaling. It is well known that B[a]P induces the production of ROS and IL-8 (a proinflammatory cytokine) through AhR activity in human keratinocytes [12,23]. In addition, we found that tomentosin suppresses B[a]P-induced AhR signaling.
Therefore, we investigated the effect of tomentosin on the production of IL-8 and ROS in HaCaT cells. As shown in Figure 6, tomentosin attenuated the B[a]P-induced production of IL-8 and ROS. Discussion The skin is the outermost organ of the body and serves as a barrier protecting the body from external stress. The skin is composed of three layers, namely the epidermis, dermis, and subcutis. The epidermis is the upper layer of the skin [24] and is in direct contact with environmental stressors such as air pollutants [25]. As keratinocytes constitute 95% of the epidermis in human skin, the condition of keratinocytes is crucial for maintaining the barrier function of the skin [26]. In this study, we demonstrated a novel protective effect of tomentosin on HaCaT cells, a normal human keratinocyte line. In the present study, we found that tomentosin exerts its antioxidant activity by activating Nrf2-mediated signaling in human epidermal keratinocytes. Tomentosin upregulated the expression of Nrf2 and its target genes such as HO-1 and NQO1. In addition, B[a]P-induced AhR signaling was suppressed by tomentosin treatment. Tomentosin inhibited the B[a]P-activated nuclear translocation of AhR and decreased the B[a]P-induced upregulation of CYP1A1 expression. These data indicate that tomentosin exerts its protective activity towards cells by activating Nrf2 signaling and inhibiting AhR signaling. Under basal conditions, Nrf2 forms a complex with Keap1 and Cul3 in the cytoplasm. Keap1 mediates the ubiquitination and proteasomal degradation of Nrf2 [27]. However, when cells are exposed to mild oxidative stress, the cysteine residues of Keap1 are oxidized, leading to a conformational change in the complex [28]. At this stage, Nrf2 dissociates from the complex and is activated and phosphorylated by intracellular molecules such as Akt and MAPKs [9]. Finally, the free Nrf2 moves to the nucleus and initiates the transcription of its target genes [29]. In this study, the expression of the Nrf2 repressor protein Keap1 was downregulated by treatment with tomentosin. In addition, tomentosin-induced Nrf2 phosphorylation was suppressed by SP600125 (a JNK inhibitor) and SB203580 (a p38 MAPK inhibitor). These data indicate that tomentosin induces Nrf2 signaling by phosphorylating Nrf2 through the activation of JNK and p38 MAPK. Further, our results also indicate that the tomentosin-induced downregulation of Keap1 contributes to Nrf2 activation. Interestingly, in this study, we found that treatment with tomentosin for up to 60 min increased ROS production, whereas treatment for ≥6 h (6-12 h) resulted in a reduction in ROS levels. To examine the role of tomentosin-induced ROS production in the activation of Nrf2 signaling, we examined the nuclear translocation of Nrf2 using N-acetyl-L-cysteine (NAC), a strong antioxidant. Co-treatment with NAC attenuated the tomentosin-induced nuclear translocation of Nrf2. This result suggests that the ROS produced during short-term tomentosin treatment contribute to Nrf2 activation by inducing the nuclear translocation of Nrf2. AhR is a ligand-dependent transcription factor that controls the expression of xenobiotic-metabolizing enzymes [30]. It is activated in response to external environmental stressors such as polycyclic aromatic hydrocarbons (PAHs), UPM, B[a]P, and dioxins [31][32][33]. When a ligand interacts with AhR, the AhR-ligand complex moves to the nucleus and binds to the XRE region. The activated AhR then increases the expression of xenobiotic-metabolizing enzymes such as cytochrome P450 1A1 (CYP1A1) [30].
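Both the ARE and XRE results above rest on the reporter normalization described in the Methods, in which firefly luciferase activity is divided by β-galactosidase activity to correct for transfection efficiency. A minimal Python sketch of that computation follows; all readings below are hypothetical illustration values, not data from the study.

def reporter_activity(luciferase_rlu, beta_gal_activity):
    # Normalize the firefly luciferase signal to the beta-galactosidase
    # signal, as described in the Methods.
    return luciferase_rlu / beta_gal_activity

# Hypothetical XRE reporter readings (arbitrary units).
conditions = {
    "vehicle": reporter_activity(1.2e4, 0.40),
    "B[a]P 3 uM": reporter_activity(9.6e4, 0.42),
    "B[a]P + tomentosin 10 uM": reporter_activity(3.1e4, 0.38),
}
baseline = conditions["B[a]P 3 uM"]
for name, activity in conditions.items():
    print(f"{name:26s} fold vs. B[a]P alone: {activity / baseline:.2f}")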
AhR signaling has also been known to cause cell and tissue damage [34,35]. In this study, we demonstrated that tomentosin treatment inhibited the B[a]P-induced AhR signaling pathway. Tomentosin suppressed the following B[a]P-induced effects: AhR nuclear translocation, CYP1A1 expression, and the production of ROS and IL-8. These results suggest that tomentosin ameliorates cell damage induced by B[a]P through inhibition of the AhR signaling pathway. Oxidative stress has been linked to various types of diseases [5,36,37]. Previous studies have reported on several natural compounds with antioxidant and cell-protective activities [38,39]. For example, cynaropicrin induces the translocation of AhR into the nucleus (by binding to it) and activates the Nrf2 signaling pathway; this protects cells from ROS-mediated oxidative damage [40]. In addition, while resorcinol inhibits AhR, it activates the Nrf2 pathway and exerts antioxidant activity in an AhR-independent manner [41]. In this study, we found that tomentosin shows a mechanism of action similar to that of resorcinol. However, in contrast to resorcinol, tomentosin induced Nrf2 signaling by producing ROS and activating p38 MAPK and JNK signaling. As discussed above, we demonstrated that tomentosin exerts antioxidant and cell-protective effects in human epidermal keratinocytes. These effects were exerted by activating Nrf2 signaling (through the induction of low levels of ROS and the activation of the p38 MAPK and JNK pathways) and by suppressing AhR signaling (by blocking the nuclear translocation of AhR). Our findings suggest that tomentosin and tomentosin-containing aromatic medicinal plant species could be used to ameliorate skin disorders caused by oxidative stress or air pollutants such as B[a]P. Conclusions In this study, we demonstrated that tomentosin upregulates the Nrf2 antioxidant pathway through ROS production as well as through the activation of JNK and p38 MAPK. In addition, tomentosin suppresses B[a]P-activated AhR signaling and inhibits the production of IL-8 and ROS induced by B[a]P. The mechanism of tomentosin in AhR/Nrf2-mediated signaling is presented in Figure 7. These data suggest that tomentosin could be applied therapeutically for oxidative stress-related skin disorders. Data Availability Statement: The data presented in this study are available in the article.
5,502
2022-05-01T00:00:00.000
[ "Biology", "Chemistry" ]
Characterization and Manipulation of Carbon Precursor Species during Plasma Enhanced Chemical Vapor Deposition of Graphene To develop a synthesis technique providing enhanced control of graphene film quality and uniformity, a systematic characterization and manipulation of hydrocarbon precursors generated during plasma enhanced chemical vapor deposition of graphene is presented. Remote ionization of acetylene is observed to generate a variety of neutral and ionized hydrocarbon precursors, while in situ manipulation of the size and reactivity of the carbon species permitted to interact with the growth catalyst enables control of the resultant graphene morphology. Selective screening of high energy hydrocarbon ions coupled with a multistage bias growth regime results in the production of 90% few-to-monolayer graphene on 50 nm Ni/Cu alloy catalysts at 500 °C. Additionally, synthesis with low power secondary ionization processes is performed and reveals further control during the growth, enabling a 50% reduction in average defect densities throughout the film. Mass spectrometry and UV-Vis spectroscopy monitoring of the reaction environment in conjunction with Raman characterization of the synthesized graphene films facilitates correlation of the carbon species permitted to reach the catalyst surface to the ultimate quality, layer number, and uniformity of the graphene film. These findings reveal a robust technique to control graphene synthesis pathways during plasma enhanced chemical vapor deposition. Introduction Currently, most high quality, large area graphene is produced via chemical vapor deposition (CVD) techniques with gaseous hydrocarbon precursors, micrometer scale Cu as a catalyst and support, and synthesis temperatures in excess of 900 °C [1][2][3][4][5]. Due to the relative thickness and composition of the catalyst and the elevated synthesis temperatures, these growths require a transfer process to the target substrate, which limits the incorporation of graphene to applications with planar geometries. In recent years, significant research efforts have focused on reducing the required synthesis temperatures and catalyst thicknesses, with an ultimate goal of developing techniques for direct synthesis on substrates other than transition metal catalysts [6][7][8][9]. The research endeavors for the development of these synthesis techniques aim at eliminating the damage- and geometry-related constraints associated with a transfer process while enabling direct incorporation of graphene in a variety of fields, from the semiconductor industry as an ultrathin diffusion barrier to the aerospace industry as lightweight strengthening and protective coatings [10][11][12][13]. Researchers have identified three promising avenues towards this goal: the application of plasma enhanced CVD (PECVD) processes, the use of bimetal catalysts, and the choice of hydrocarbon precursor. Despite these efforts, control of graphene layer number and film connectivity remains a significant challenge as reaction temperatures and catalyst thicknesses are reduced [6]. For example, PECVD techniques relying on ionization of the carbon precursor to reduce the energy required for graphene synthesis have yielded quality graphene at 600 °C on predominantly copper Cu/Ni alloys; however, incomplete dehydrogenation and multilayer formation are observed upon further temperature reduction due to the reduced catalytic activity of the primarily Cu substrate [14].
Similarly, transition metals with partially filled d orbitals (Fe, Co, Ni) have been identified as suitable candidates for CVD synthesis temperature reduction due to their increased ability for carbon ion stabilization. However, the increased carbon solubility of these metals leads to uncontrollable layer formation upon cooling [15][16][17][18][19]. In attempts to alleviate this issue, Ni has been combined with less catalytic metals, such as Au, to suppress the formation of multilayer films. These catalysts show the potential to enable the formation of few-to-monolayer graphene films at 450 °C, following a 600 °C anneal of the catalyst prior to growth [20]. Though these results are promising, they require catalyst thicknesses of 500 nm or greater to minimize multilayer formation, in addition to the aforementioned catalyst pretreatments at elevated temperatures. In addition to the research efforts mentioned above, numerous gaseous carbon precursors, including methane, ethane, and propane, have been investigated. It was found that larger carbon precursor molecules allow graphene synthesis at reduced temperatures due to increased ion stability and reduced energy requirements for dehydrogenation [21,22]. This trend has led to the development of CVD techniques employing solid phase and liquid phase carbon sources to further reduce the required reaction temperatures for graphene synthesis through a significant increase in carbon precursor size [23][24][25]. Graphene synthesis at 300 °C has been performed with benzene and poly(methyl methacrylate) (PMMA) on Cu substrates; however, 1000 °C pretreatment of the catalyst is required prior to the synthesis [24,26]. These results suggest the importance of the synergistic relationships among the carbon precursor molecule size, the ionization state, the target substrate reactivity, and the carbon solubility and thickness of the catalyst. Although graphene formation on low-reactivity catalysts has been carried out through ionization of the hydrocarbon precursor, and graphene growth on high-reactivity catalysts has been achieved through both bimetal catalysts and increased hydrocarbon precursor sizes, techniques for in situ manipulation of carbon precursors tailored to the specific target substrate have not been thoroughly investigated. This report demonstrates a unique methodology to gain in-depth understanding of the synergistic relationships between critical growth parameters. This investigation was carried out using a PECVD synthesis technique in which the size and ionization state of the carbon precursor molecules reaching the growth catalyst are manipulated to reduce the rate of nucleation and absorption into the catalyst bulk, resulting in the formation of a continuous few-to-monolayer graphene film at 500 °C. This is achieved through control of the inlet between a remote inductively coupled plasma (ICP) location and the catalyst location, which enables both ion screening and secondary capacitively coupled plasma (CCP) generation. This precursor screening technique is demonstrated on a 50 nm thick Ni/Cu catalyst (2 wt% Cu), representing a 10-fold reduction in catalyst thickness compared to previously published results, while eliminating the elevated temperature pre-growth anneal required by previous reports [20].
Characterization of the generated plasma species is performed via UV-Vis inspection, while mass spectrometer (MS) characterization of the growth chamber coupled with current monitoring at the catalyst location enables identification of the species reaching the catalyst. It is observed that the layer number and defect concentrations can be controlled via ion screening processes, while a secondary ionization procedure leads to further reduction in both defect concentrations and multilayer portions of the film. Reactor Configuration and Capabilities All experimental results were obtained in a custom-built reactor, as shown in Figure 1a, with a remote ICP location and a configurable inlet along the path from the plasma to the catalyst. A positive or negative voltage can be applied to the inlet plates independently to screen ions and/or generate a secondary CCP. Current monitoring at the sample stage enables characterization of the inlet plate effects on charged species reaching the catalyst. Monitoring of the growth chamber via mass spectrometry enables identification of neutral species reaching the catalyst location through analysis of the fragments generated upon ionization at the detector. Ionized species generated in the plasma are not expected to reach the MS, which is separated from the main chamber by a leak valve. This is verified by the lack of signal detected when the ionizing component of the MS is turned off in the presence of plasma at the ICP or CCP location. As depicted in Figure 1b, the background composition of the chamber at 1 × 10 −7 torr is primarily H 2 O and CO 2 . Introduction of C 2 H 2 and H 2 results in an expected increase in 1 and 2 carbon species, while ignition of a 20 W plasma at the remote ICP location results in the generation of 3 and 4 carbon species, in agreement with previously reported characterizations of acetylene plasmas [27][28][29]. Figure 1c displays the UV-Vis spectra collected at the ICP and CCP locations, confirming the generation of these larger hydrocarbon molecules in the presence of a plasma. Characterization of the gaseous species generated at the remote ICP location and of those that reach the mass spectrometer reveals an increase in ionization events (Figure 2a) and a reduction in neutral species reaching the MS detector (Figure 2b) with increasing remote plasma power. However, plasma power variation alone does not enable selection for carbon precursor size, as increasing power increases the generation of both large and small species. Additionally, current measurements at the catalyst location during remote plasma operation confirm that primarily positive ionic species are reaching the catalyst and that the application of a negative bias to a reaction chamber inlet plate effectively blocks these ions from reaching the catalyst (Figure 2c). This characterization indicates that, while increasing remote plasma power alone does not enable significant selectivity for the size of the species generated, the average size of the carbon precursors reaching the catalyst can be increased through remote plasma operation coupled with screening of high energy ions through the application of a negative bias at a chamber inlet plate. The novel design of the reaction chamber enables characterization and manipulation of gaseous species during graphene synthesis, revealing the synergistic relationship between growth parameters.
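As a concrete illustration of how the MS traces discussed above can be reduced to precursor populations, the short Python sketch below bins an m/z intensity table into carbon-number groups. The C3 (m/z 36-39) and C4 (m/z 47-50) windows follow the assignments quoted later in the Results; the C1/C2 windows and the example intensities are assumptions made here purely for illustration.

CARBON_BINS = {
    "C1 (CHx)": range(12, 17),   # assumed window for CHx fragments
    "C2 (C2Hx)": range(24, 31),  # assumed window for C2Hx fragments
    "C3": range(36, 40),         # m/z 36-39, per the text
    "C4": range(47, 51),         # m/z 47-50, per the text
}

def bin_spectrum(spectrum):
    # spectrum: dict mapping integer m/z -> intensity (arbitrary units).
    return {
        label: sum(i for mz, i in spectrum.items() if mz in window)
        for label, window in CARBON_BINS.items()
    }

# Hypothetical partial acetylene-plasma spectrum, for illustration only.
example = {15: 3.1, 26: 14.2, 27: 6.8, 37: 2.2, 39: 1.9, 49: 1.1}
print(bin_spectrum(example))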
Reaction Chamber Characterization UV-Vis characterization was performed through spectrum collection (USB200+, Ocean Insight, Rochester, NY, USA) of the ICP and CCP signals through isolated viewports, above the ICP and on the main chamber for the CCP. Stage current characterization was performed through picoammeter (Keithley 485, Tektronix Inc., Beaverton, OR, USA) monitoring of the sample stage. Mass spectrometry data (PrismaPro QMG 250 M2, Pfeiffer Vacuum, Nashua, NH, USA) were collected in a secondary chamber with differential pumping to maintain 1 × 10 −6 torr, which is connected to the main chamber through a leak valve. Catalyst Deposition and Graphene Synthesis 50 nm Ni/Cu catalysts were deposited on Si/SiO 2 wafers through magnetron sputtering (AXXIS, Kurt J. Lesker Company, Jefferson Hills, PA, USA) of 48 nm Ni followed by 2 nm Cu without breaking vacuum. This catalyst composition and thickness were identified through preliminary experimentation to minimize catalyst dewetting during synthesis (observed as thickness is reduced) and to enable graphene formation (difficult with increased Cu concentrations) without significant multilayer formation (common with reduced Cu concentrations) [30,31]. Graphene synthesis was performed in the custom PECVD chamber, initiated by chamber evacuation to a base pressure of 1 × 10 −7 torr followed by heating to 500 °C under 15 sccm of H 2 , resulting in a chamber pressure of 50 mTorr. To promote cleaning and alloying of the catalyst, the 1 cm × 1 cm sample was held at 500 °C for 2 min under H 2 flow prior to introduction of the hydrocarbon precursor. Graphene growth was initiated by introduction of C 2 H 2 at 0.1 sccm and ignition of a 20 W ICP plasma for 1 min. The screening bias and secondary CCP were applied according to the desired synthesis regime through a −40 V bias application (PSFX, XP Glassman, High Bridge, NJ, USA) to the first inlet plate or CCP generation at 2.5 W (Bertan 205A, Spellman HVEC, Hauppauge, NY, USA) with a negative bias applied to the second plate. After preliminary experimentation, a −40 V screening bias was identified as optimal to stop all detection of current at the sample location without plasma ignition or arcing at the screening location during the synthesis processes. Following completion of the synthesis regime, the ICP, CCP, and screening bias powers were set to zero, as was the C 2 H 2 flow rate. Finally, the sample was allowed to cool under 15 sccm H 2 to 150 °C over approximately 15 min before venting the chamber to atmosphere.
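Because the growth proceeds as a fixed sequence of setpoints, the recipe reads naturally as structured data; the Python sketch below encodes the multistage ion-screening variant using the values reported above. The Step fields are illustrative, and the assumption that the ICP remains lit during the feedstock-free second half of the multistage regime is ours, not stated explicitly in the text.

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    duration_s: float
    h2_sccm: float = 15.0      # H2 flow maintained throughout
    c2h2_sccm: float = 0.0
    icp_w: float = 0.0
    ccp_w: float = 0.0
    screen_bias_v: float = 0.0

# Multistage ion-screening regime at 500 C, per the reported setpoints.
recipe = [
    Step("pre-growth clean/alloy", 120),
    Step("growth, ions screened", 30, c2h2_sccm=0.1, icp_w=20.0, screen_bias_v=-40.0),
    Step("dehydrogenation, no feedstock", 30, icp_w=20.0),  # ICP assumed to stay lit
    Step("cooldown to 150 C under H2", 900),
]

for step in recipe:
    print(f"{step.name:30s} {step.duration_s:5.0f} s  C2H2={step.c2h2_sccm:g} sccm  "
          f"ICP={step.icp_w:g} W  bias={step.screen_bias_v:g} V")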
Graphene Transfer and Characterization Graphene was transferred from the catalyst through spin coating (WS-650, Laurell Technologies, North Wales, PA, USA) of a 300 nm polymethyl methacrylate (PMMA) support and baking in air at 150 °C for 5 min. The sample was submerged in 0.5 M FeCl 3 to etch both Ni and Cu until the graphene/PMMA floated to the surface. Following 5 rinses of 1 min each in DI water, the graphene with PMMA support was transferred to fresh Si/SiO 2 and the PMMA was removed in acetone. Raman characterization was performed on a Jobin Yvon HR800 (HORIBA, Kyoto, Japan) with 532 nm laser excitation and mapping acquisition capabilities through a motorized sample stage. Raman map characterization and spectrum averaging were performed through in-house software, written in R, to peak fit the D, G, and 2D bands for each spectrum collected and generate 2D plots. Results and Discussion To identify the effects of in situ precursor manipulation on achievable graphene quality, all reported synthesis is performed as described in Section 2.3 with only variations of the plasma generation location and the energized state of the screening plate. Following transfer of the graphene films, Raman mapping is performed to characterize quality and uniformity, with ratios of the intensities of the D, G, and 2D bands as well as the full width at half maximum (FWHM) of the 2D peak used to determine the layer number and defect density of the films. Fewer layers are present with increasing I 2D/G , and defect densities increase with increasing I D/G . While pristine monolayer graphene displays a nearly undetectable I D/G and an I 2D/G ≥ 2, when defects are present monolayer graphene is identified by an I 2D/G > 1 and FWHM 2D < 100 cm −1 [32,33]. To categorize areas of multilayer and monolayer graphene in these samples, 2D maps of I 2D/G are presented with color scales fixed between 1 and 2, with black areas, I 2D/G ≤ 1, representing multilayer portions of the film, white areas, I 2D/G ≥ 2, representing low defect density monolayer portions of the film, and orange areas, 1 < I 2D/G < 2, representing few-to-monolayer portions of the film. Figure 3a,b display 100 µm 2 I 2D/G Raman maps, with accompanying average Raman spectra for the mapped areas, of samples synthesized with and without an applied screening bias at the inlet plate, respectively. It is observed that with the application of a screening bias, both the average layer number and the areas of multilayer (areas with I 2D/G ≤ 1, indicated by black portions of the Raman map) are reduced by 62% compared to the unscreened case. The reduction of multilayer portions of the film under the applied bias condition is attributed to the screening of high energy ions that are more readily dehydrogenated and adsorbed into the catalyst bulk, leading to rapid saturation and multilayer formation upon cooling.
While these ions are screened by the applied bias, the neutral molecules, including the 3 and 4 carbon species (m/z 36-39, 47-50) generated in the remote plasma, are permitted to reach the catalyst location and participate in graphene formation at the catalyst surface. Though a significant reduction in multilayer portions is observed, the graphene film remains highly defective. The films (Figure 3a,b) have an average I D/G of 1.2, with an increased background between the D and G peaks indicative of remaining sp 3 hybridization through C-H bonds [34,35]. Synthesis results under the biased plate condition indicate that to reduce the layer number and defect densities of the graphene films, both a reduction in nucleation density and an increase in dehydrogenation rates must be achieved. To characterize the capability of this ion screening technique toward achieving these goals, multistage growths were performed in which the screening bias was applied for a portion of the synthesis. Figure 4a,b display Raman maps and accompanying average Raman spectra from samples in which the bias was applied for the first or second half of the 1-min synthesis, respectively. The synthesis performed with a screening bias for the first 30 s of the growth (Figure 4a) displays a small increase in multilayer coverage when compared to the synthesis with bias application for the growth in its entirety (Figure 3a). This result indicates that the initial screening of high energy ions results in nucleation occurring primarily from neutral and larger carbon containing species, and the removal of the screening bias allows high energy ions to reach the catalyst and continue both growth at the surface and saturation of the catalyst bulk. Conversely, the sample produced with a screening bias applied for the second 30 s (Figure 4b) displays a significant increase in multilayer formation, indicating high rates of nucleation, growth, and absorption into the catalyst bulk during the initial 30 s where no screening bias is applied.
Application of the screening bias during the final 30 s of the synthesis removes the ionized species responsible for dehydrogenation and film completion, resulting in increased multilayer formation. Further reduction in multilayer portions of the film and defect density (Figure 4c) is achieved through application of the bias for the first 30 s of the synthesis followed by removal of both the bias and the carbon precursor feedstock to the remote plasma location for the second half of the synthesis (Figure 4d). This results in reduced nucleation rates during the initial stage of the growth, associated with bias application, and, with the removal of both the bias and the carbon feedstock, increased rates of dehydrogenation without continued layer formation during the second half of the synthesis. This multistage ion screening synthesis technique enables production of continuous and predominantly few-to-monolayer, 91% I 2D/G > 1, graphene at 500 °C without requiring an increased temperature anneal. Further control over the reactivity of the species reaching the catalyst location can be achieved through the generation of a low power, 2.5 W, secondary plasma after the ion screening location. Figure 5a shows a Raman map and average Raman spectrum of graphene produced during a 1-min synthesis with both a remote plasma and a secondary plasma, representing a significant reduction in average defect densities, from 1.4 to 0.7 I D/G , while increasing few-to-monolayer coverage, 95% I 2D/G > 1. MS characterization (Figure 5b) of the reaction environment reveals a reduction in 3 and 4 carbon species with the ignition of a secondary plasma, while the concentration of 1 and 2 carbon species remains relatively unaffected. Additionally, UV-Vis monitoring of the secondary CCP (Figure 5c) reveals that primarily H ionization events occur when the remote ICP is present, while both H and CH ionization events occur when only the secondary CCP is present (Figure 1c). These results, coupled with the detection of a current at the sample location upon ignition of the secondary CCP, indicate that the 3 and 4 carbon species generated in the 20 W ICP are not reaching the MS and may be the primary species ionized at the secondary CCP location prior to interacting with the catalyst. Comparing the Raman map under this two-plasma, ICP and CCP, condition (Figure 5a) to the map of the sample synthesized under the multistage bias condition (Figure 4c), an increased number but decreased size of multilayer islands is observed in the two-plasma case. We hypothesize that this phenomenon results from an increased nucleation rate associated with larger carbon precursors, which are generated at the ICP location and ionized at the CCP location before reaching the catalyst. These larger ionized species are more likely to nucleate at the catalyst surface, resulting in the increased number of multilayer islands observed, but are less likely to be absorbed into the catalyst bulk, resulting in the overall increase in few-to-monolayer content of the film. While bias application alone screens high energy ions and a multistage bias synthesis condition reduces multilayer formation, this secondary ionization technique increases few-to-monolayer coverage to 95% through both increasing the reactivity of carbon precursors and reducing the rate of catalyst saturation.
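The coverage percentages quoted in this section follow directly from the I 2D/G thresholds defined above (≤ 1 multilayer, 1-2 few-to-monolayer, ≥ 2 low defect monolayer). The following Python sketch applies that classification to a hypothetical ratio map, assuming NumPy is available; the random toy map stands in for real peak-fitted data.

import numpy as np

def classify(i2d_over_g):
    # Label each map pixel by the I(2D)/I(G) thresholds used in the text.
    labels = np.full(i2d_over_g.shape, "few-to-monolayer", dtype=object)
    labels[i2d_over_g <= 1.0] = "multilayer"
    labels[i2d_over_g >= 2.0] = "monolayer"
    return labels

def few_to_monolayer_coverage(i2d_over_g):
    # Fraction of the map with I(2D)/I(G) > 1, reported as a percentage.
    return 100.0 * float(np.mean(i2d_over_g > 1.0))

rng = np.random.default_rng(0)
toy_map = rng.uniform(0.5, 2.5, size=(100, 100))  # hypothetical ratio map
labels = classify(toy_map)
counts = {lab: int(np.sum(labels == lab)) for lab in ("multilayer", "few-to-monolayer", "monolayer")}
print(counts)
print(f"few-to-monolayer coverage: {few_to_monolayer_coverage(toy_map):.0f}%")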
This phenomenon of controlling the concentration and ionization states of the precursor molecules permitted to interact with the growth substrate has resulted in the significant increase in few-to-monolayer coverage in the secondary bias case. While the dependence on carbon species size and ionization state has been demonstrated, the specific roles of each ionized species within the larger groups, i.e., 3 carbon and 4 carbon species, will require in situ characterization of reactions occurring at the catalyst surface. Future work in this area should lead to improvements in targeting specific precursor species to intended substrates and continue to advance efforts toward graphene inclusion in a variety of fields. Figure 5. Graphene synthesis with both ICP and secondary CCP resulting in reduced layer number and defect density. Raman I 2D/G map, (a), indicating primarily monolayer formation (95% I 2D/G > 1) and accompanying average Raman spectrum displaying reduced defect densities compared to the multistage synthesis results in Figure 4. (b) Mass spectrum depicting the change in hydrocarbon species present with the ignition of a secondary CCP. Note that the number of 3 and 4 carbon species is reduced with ignition of the secondary plasma while the number of 1 and 2 carbon species remains nearly constant. (c) UV-Vis spectrum of the CCP collected while ICP plasma generation is also occurring, indicating primarily H ionization.
Note the reduction in CH and C 2 ionization events compared to the CCP spectrum, Figure 1c, collected when no upstream ICP is present. Conclusions In summary, we have demonstrated graphene synthesis techniques utilizing in situ manipulation of carbon precursors generated during plasma enhanced chemical vapor deposition to achieve continuous graphene films at reduced temperatures on reduced catalyst thicknesses. This experimental approach has allowed us to gain an in-depth understanding of the correlation among the parameters investigated. Moreover, this synthesis technique, which is not represented in the literature, enables the manipulation of nucleation density, layer number, and defect densities through control of carbon precursor sizes and ionization states. Screening bias application between a remote ionization location and the sample location facilitates targeting of larger neutral molecules, while a secondary ionization event can increase the reactivity of these molecules. Our results demonstrate that by utilizing this technique, few-to-monolayer graphene (with an average Raman D to G peak intensity ratio I D/G = 0.7) can be synthesized on 50 nm Ni/Cu thin film catalysts at 500 °C, without the need for any high temperature catalyst pretreatments. This technique represents not only an avenue for continued reduction of the synthesis temperature and transition metal catalyst thickness requirements but also reveals a novel method for active species control in broader PECVD synthesis techniques. Funding: This research was funded by the National Science Foundation (NSF) under award No. 1711994 and by funds from the Oregon Metal Initiative. The participation of undergraduate students in this research was supported by NSF REU-Site Award No. 1560383.
6,214.6
2020-11-01T00:00:00.000
[ "Chemistry" ]
Design, synthesis and biological assessment of new 1-benzyl-4-((4-oxoquinazolin-3(4H)-yl)methyl) pyridin-1-ium derivatives (BOPs) as potential dual inhibitors of acetylcholinesterase and butyrylcholinesterase Alzheimer's disease (AD) is among the fastest-growing neurodegenerative diseases and is mainly caused by the loss of the neurotransmitter acetylcholine in the hippocampus and cortex. Interest in dual acetylcholinesterase (AChE)/butyrylcholinesterase (BuChE) inhibitors for treating Alzheimer's disease has increased. Here we report the design and synthesis of a new series of 1-benzyl-4-((4-oxoquinazolin-3(4H)-yl)methyl) pyridin-1-ium derivatives (BOPs) assessed as BuChE and AChE inhibitors. Ellman's approach was used for the evaluation of the AChE and BuChE inhibitory activities. Moreover, docking studies were conducted to predict the mechanism of action. Among all synthesized compounds, 1-(3-bromobenzyl)-3-((4-oxoquinazolin-3(4H)-yl)methyl) pyridin-1-ium bromide (BOP-1) was found to be the most active compound, with dual activity for the inhibition of AChE (IC50 = 5.90 ± 0.07 μM) and BuChE (IC50 = 6.76 ± 0.04 μM), and 1-(4-chlorobenzyl)-3-((6,7-dimethoxy-4-oxoquinazolin-3(4H)-yl)methyl) pyridin-1-ium chloride (BOP-8) showed the highest AChE inhibitory activity (IC50 = 1.11 ± 0.09 μM). The synthesized compounds BOP-1 and BOP-8 could be proposed as valuable lead compounds for further drug discovery development against AD. Introduction Alzheimer's disease (AD) is a progressive neurodegenerative disorder, chiefly prevalent in elderly people [1]. The number of people suffering from AD is increasing significantly. According to the World Alzheimer Report (2019), about 50 million people were living with AD worldwide in 2019, and it is estimated that this number will have risen to 152 million by 2050 [2]. Although the molecular basis of the disease is still not thoroughly understood, novel therapeutic approaches to alleviate its pathophysiological symptoms have achieved great advancements in this field [3]. Among the diverse hypotheses proposed for the mechanism of AD, the cholinergic hypothesis has played a determining role in guiding the search for viable strategies to overcome the disease [4,5,6]. Based on this theory, a dramatic decline in acetylcholine neurotransmitter levels in the hippocampus and cortex is responsible for memory loss, learning impairments and cognitive dysfunction [6]. Butyrylcholinesterase (BuChE) and acetylcholinesterase (AChE) are the enzymes responsible for catalyzing the hydrolysis of acetylcholine and consequently generating the clinical manifestations of AD. Therefore, compounds with dual inhibitory effects on AChE and BuChE can be promising candidates for the treatment of AD [7,8]. Galantamine, donepezil and rivastigmine are three widely available FDA-approved drugs for AD (Figure 1) [9]. Among these drugs, rivastigmine shows dual inhibition of BuChE and AChE [10]. Donepezil, a piperidine-based reversible acetylcholinesterase inhibitor, is approved for the treatment of mild-to-moderate AD. In comparison to other AChEIs, donepezil shows greater cognitive enhancement and a more desirable pharmacokinetic, pharmacodynamic and safety profile [11]. Thus, the design and synthesis of novel multi-target compounds aimed at lowering adverse effects and heightening efficacy should notably ease the management of AD [12].
In recent years, the quinazoline/quinazolinone ring scaffold has increasingly gained interest as a privileged structure in a variety of marketed drugs and a broad range of biologically active compounds, such as anti-microbial, anti-cancer, neuroprotective, and also anti-AD agents, exhibiting inhibition of Aβ aggregation, inhibition of butyrylcholinesterase (BuChE), dual inhibition of acetylcholinesterase (AChE), and essential scavenging effects (Figure 2) [13,14,15,16,17,18]. The benzyl pyridinium structure has also shown anti-ChE activity. For example, docking studies reveal the anti-ChE activity of this moiety through interactions with amino acid residues in the PAS and CAS of AChE, and its effectiveness is reported in several SAR studies [19,20,21]. In light of the new investigations and following the previous findings, some new 1-benzyl-4-((4-oxoquinazolin-3(4H)-yl)methyl) pyridin-1-ium derivatives (BOPs) were designed and synthesized. This modification has been made on donepezil by a scaffold replacement in pursuit of finding potential multi-targeted compounds (Figure 3). Thus, the synthesized compounds with different aryl pendants were evaluated as AChE and BuChE dual inhibitors. General chemistry Reagents were provided by Sigma-Aldrich and Merck and were utilized as provided without further purification. The 1H nuclear magnetic resonance (NMR) spectra of all derivatives were recorded on a Bruker FT-500 MHz spectrometer with tetramethylsilane (TMS) as the internal standard. Coupling constants are presented in hertz (Hz) and chemical shifts are expressed as δ (parts per million) downfield from TMS (Supplementary). All NMR analyses were carried out at room temperature. Figure 4 indicates the target compounds' atom numbering employed for the 1H NMR data. A Kofler hot stage was used for determining the compounds' melting points. Thin-layer chromatography (TLC) on Merck pre-coated Silica Gel F254 plates was used for routine checking of product mixtures and of reaction progress. Synthesis 2.2.1. General process for the synthesis of (pyridinylmethyl)quinazolin-4(3H)-one derivatives (9a, b and 12a, b) The appropriate chloromethyl pyridine (1 equiv) was added to a mixture of the quinazolin-4(3H)-one derivative (1 equiv) and an excess of anhydrous potassium carbonate in 5 ml dry DMF, followed by stirring the mixture for 4 h under argon at 50 °C. TLC was used for checking the reaction progress. Water (20 ml) was added, the mixture was cooled, and the product was extracted with ethyl acetate (3 × 30 ml). The combined organic extracts were dried over Na 2 SO 4 , and the solvent was removed under reduced pressure. The resulting precipitated solid was afforded in good yield and was used in the next step without further purification. Starting from 6,7-dimethoxy-quinazolin-4(3H)-one (5) (1 mmol, 0.20 g) and 3-chloromethyl pyridine hydrochloride (1 mmol, 0.164 g), compound 9a was afforded in 71% yield, mp = 142-148 °C (1H NMR data: Supplementary). In order to prepare the derivatives' stock solutions, they were dissolved in dimethyl sulfoxide (DMSO) and then diluted in absolute ethanol to obtain three different concentrations for the assay. The experiments were done in triplicate for each concentration. The assay solution included 60 µL of DTNB, 2 mL of phosphate buffer (0.1 M, pH = 8), 20 µL of 5 IU/mL butyrylcholinesterase solution, and 30 µL of inhibitor. Afterwards, the mixture was pre-incubated at 25 °C for 10 min. Then, 20 µL of butyrylthiocholine iodide was added as the substrate to the 24 wells to start the reaction.
A Synergy HTX multimode plate reader was employed to record the changes in absorbance at 412 nm for 5 min. To correct for the non-enzymatic reaction, assays were run against a blank containing all components except BuChE. IC50 values were calculated graphically in Microsoft Excel 2019 from the concentration-inhibition curves of the derivatives. The same assay was applied with AChE to obtain the derivatives' anti-AChE activity. Docking studies The receptor PDB files of AChE co-crystallized with donepezil (PDB code 1EVE) and of BuChE (PDB code 6QAA) were obtained from www.rcsb.org. PMV 1.5.6 was then used to remove water molecules and the complexed ligand. The atomic coordinates of the ligands (BOP-1 and BOP-8) were drawn in Hyperchem 8.1.10. Polar hydrogens and charges were added in the AutoDock program. The AutoDock parameters for AChE docking were set as follows: grid box size 60 × 60 × 60, centered at 5.077, 65.107, 55.746 (x, y, z), with a spacing of 0.375 Å. For the BuChE docking process, a grid box of size 30 × 30 × 30, centered at 17.0, 41.10, 39.0 (x, y, z), with a spacing of 0.375 Å, was used. Other parameters were left at their defaults. The computed geometries were ranked by binding free energy, and the best poses were selected for further analysis. Molecular visualization was carried out in Discovery Studio. Cholinesterase inhibition assay The anticholinesterase activities of the BOPs and of donepezil hydrochloride, used as the reference compound, were determined with a modified Ellman's method. The anticholinesterase activities of the synthesized compounds, expressed as IC50 values, are presented in Table 1. According to Table 1, the BOPs can be categorized into two groups: 1) those without a methoxy group on the oxoquinazolinone ring (BOP-1-5), and 2) those with methoxy groups on the oxoquinazolinone ring (BOP-6-12). In both groups, the anti-BuChE and anti-AChE effects could be tuned by varying the benzyl halide linked to the 3- or 4-(chloromethyl)pyridine moiety. In group-1 (BOP-1-5), BOP-1, bearing a bromine atom at the C-3 position of the benzyl group, showed the strongest AChE inhibitory effect (IC50 = 5.90 ± 0.07 μM). Changing the position and substituting a chlorine atom at C-4 gave compound BOP-2, with lower AChE inhibitory activity (IC50 = 47.14 ± 0.48 μM). Replacing the chlorine atom with fluorine (BOP-3) or CN (BOP-4) resulted in derivatives with very low activities. Although removal of the substituent from the benzyl ring raised the AChE inhibition for compound BOP-5 (IC50 = 41.21 ± 0.62 μM), compound BOP-1 remained the best AChE inhibitor in this group. For BuChE inhibition, the compounds in group-1 behaved similarly to their AChE inhibitory activity, although compound BOP-3, with a fluorine atom at C-4, showed a better result than its AChE inhibition (AChE IC50 > 100 μM, BuChE IC50 = 70.43 ± 0.41 μM). For BOP-5, with no substituent on the benzyl ring, BuChE inhibition reached an IC50 of 12.63 ± 0.11 μM, lower than its AChE IC50 of 41.21 ± 0.62 μM. In group-1, the BuChE inhibitory activities were stronger than the AChE inhibition, except for BOP-1, which showed slightly better AChE activity (AChE IC50 = 5.90 ± 0.07 μM, BuChE IC50 = 6.76 ± 0.04 μM) and was the best compound in this group. 
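The IC50 calculations described above were performed graphically in Microsoft Excel; purely as an illustration, the sketch below shows how the same estimate could be obtained programmatically. The concentration and percent-inhibition values are hypothetical placeholders, and the four-parameter logistic fit via SciPy is one reasonable choice rather than the authors' actual procedure.

```python
# Hypothetical sketch: estimating an IC50 from a concentration-inhibition curve.
# The data points are illustrative placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.1, 1.0, 10.0, 100.0])    # inhibitor concentration (uM), hypothetical
inhib = np.array([8.0, 35.0, 71.0, 93.0])   # % inhibition, hypothetical

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)

# Initial guesses: 0-100% span, IC50 near the mid-range concentration.
params, _ = curve_fit(four_pl, conc, inhib, p0=[0.0, 100.0, 5.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.2f} uM")
```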
In the second group, containing methoxy substitution on the oxoquinazolinone ring (BOP-6-12), the best compound was BOP-8, with a chlorine atom at the C-4 position, which showed promising potency as an AChE inhibitor (IC50 = 1.11 ± 0.09 μM). However, BOP-8 had no inhibitory activity against BuChE, which indicates that this compound is a good selective AChE inhibitor. Replacement of the chlorine with hydrogen, fluorine and CN at position C-4 reduced the AChE inhibitory activity in compounds BOP-10, BOP-7 and BOP-9, with IC50 = 10.08 ± 0.28 μM, IC50 = 21.92 ± 0.2 μM and IC50 > 100 μM, respectively. BOP-6, with a 3-bromobenzyl ring, showed intermediate potency within this group (IC50 = 6.77 ± 0.24 μM). Compounds BOP-6 to BOP-12 showed a lower inhibitory effect on BuChE (except BOP-6 and BOP-10, with intermediate IC50 values of 8.77 ± 0.1 μM and 18.84 ± 0.34 μM, respectively). The two compounds bearing a 4-(methyl)pyridine moiety (BOP-11 and BOP-12) showed lower inhibitory activities against both AChE and BuChE than the compounds containing the 3-(methyl)pyridine group. According to the observed results, group-1 had higher activity for BuChE inhibition; group-2, with the substituted methoxy groups, on the other hand, had better results for AChE inhibition. Screening of physicochemical characteristics Lipinski's rule of five is commonly used to estimate drug-likeness, i.e., to decide whether a compound with a certain biological activity has properties that would make it a potentially active drug in humans. We calculated the pharmacokinetic profiles and possible violations of the rule of five for the most active compounds, BOP-1 and BOP-8, with the SwissADME web-based tool [23]; the results are presented in Table 2. As the calculated data show, both compounds have suitable properties and do not violate Lipinski's rule of five. Given that the majority of biologically active compounds regarded as drug candidates show no more than one violation of Lipinski's criteria, both BOP-1 and BOP-8 meet the criteria and can therefore be regarded as drug candidates. Docking studies Docking studies were carried out with AutoDock 4 to investigate the binding modes of the most active compounds, BOP-1 and BOP-8, in the active sites of BuChE (PDB: 6QAA) and AChE (PDB: 1EVE). As can be seen in Figure 8 and Figure 9, the orientations of compounds BOP-1 and BOP-8 were studied in the active site of AChE. The results showed that BOP-1 and BOP-8 were strongly bound in their optimal conformations to AChE, with binding energies of -10.75 kcal/mol and -10.26 kcal/mol, respectively. According to the interaction mode of BOP-1 (Figure 8), the nitrogen atom of the quinazolinone aligned toward the Ser122 residue via an H-bond, and the oxygen atom of the carbonyl group interacted with Gly123 through another H-bond. In the case of compound BOP-8, as shown in Figure 9, the nitrogen atom of the quinazolinone aligned toward the Tyr130 residue via an H-bond, and the methoxy groups interacted with Gly123 and Ser124 through H-bond interactions. These three key H-bond interactions account for the high inhibitory potency of compound BOP-8 against AChE. For compound BOP-1 in complex with BuChE (Figure 10), a conventional H-bond between the carbonyl oxygen and His438 was observed, together with pi-pi stacking interactions of the quinazolinone phenyl ring and the benzyl ring with several active-site residues. 
We also docked compound BOP-8, a selective AChE inhibitor, into the active site of BuChE. As shown in Figure 11, there were no major interactions with active-site residues compared with its AChE binding pattern, in particular a lack of H-bonds, which is in line with the in vitro finding that BOP-8 is a weak BuChE inhibitor. Conclusion We synthesized and assessed novel oxoquinazolinone-benzyl pyridinium hybrids as cholinesterase inhibitors. The research findings showed moderate to good AChE and BuChE inhibitory activity for the synthesized BOPs; among them, compound BOP-1 showed the best dual anti-AChE and anti-BuChE effect, with IC50 values of 5.90 ± 0.07 and 6.76 ± 0.04 μM, respectively. In addition, compound BOP-8 possesses selective and potent AChE inhibitory activity, with an IC50 of 1.11 ± 0.09 μM and no inhibition of BuChE, which was confirmed by the docking studies. Furthermore, the pharmacokinetic properties of BOP-1 and BOP-8 were supported by Lipinski's rule of five. In general, the present research identified a novel strong dual inhibitor of AChE and BuChE (BOP-1) and a new selective and potent anti-AChE agent (BOP-8) with potential therapeutic advantages and further research value for the treatment of AD. Kouros Divsalar, Ali Asadipour: Analyzed and interpreted the data. Alireza Foroumadi: Conceived and designed the experiments. Funding statement This work was supported by the Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences (99-33). Data availability statement Data included in article/supplementary material/referenced in article.
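The rule-of-five screening reported above was performed with the SwissADME web tool; the following sketch shows how an equivalent check could be run locally with RDKit. The SMILES string is a simplified 1-benzylpyridinium fragment used purely as a placeholder — it is not the actual structure of BOP-1 or BOP-8.

```python
# Illustrative Lipinski rule-of-five check with RDKit.
# The SMILES below is a hypothetical placeholder fragment, not BOP-1/BOP-8.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

smiles = "c1ccc(cc1)C[n+]1ccccc1"  # simplified 1-benzylpyridinium fragment
mol = Chem.MolFromSmiles(smiles)

rules = {
    "MW <= 500": Descriptors.MolWt(mol) <= 500,
    "logP <= 5": Descriptors.MolLogP(mol) <= 5,
    "H-bond donors <= 5": Lipinski.NumHDonors(mol) <= 5,
    "H-bond acceptors <= 10": Lipinski.NumHAcceptors(mol) <= 10,
}
for rule, ok in rules.items():
    print(f"{rule}: {'pass' if ok else 'fail'}")
violations = sum(not ok for ok in rules.values())
print(f"Lipinski violations: {violations} (no more than one is usually acceptable)")
```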
3,246.6
2021-04-01T00:00:00.000
[ "Chemistry", "Medicine" ]
Twistor form of massive 6D superparticle The massive six-dimensional (6D) superparticle with manifest (n,0) supersymmetry is shown to have a supertwistor formulation in which its "hidden" (0,n) supersymmetry is also manifest. The mass-shell constraint is replaced by Spin(5) spin-shell constraints which imply that the quantum superparticle has zero superspin; for n=1 it propagates the 6D Proca supermultiplet. Introduction Twistors are spinors of (a cover of) the conformal group. They arise in formulations of conformally invariant theories that make the conformal invariance manifest. For spacetime dimensions D = 3, 4, 6 (which we abbreviate to 3D etc.) there is a natural superconformal extension of the conformal group [1] and hence a natural extension of twistors to supertwistors [2], which can be used to construct manifestly superconformally invariant theories in these dimensions; recent field theory examples can be found in [3,4]. In the context of particle mechanics, the superconformal invariance of the massless superparticle becomes manifest in a phase-space formulation in which the phase-space coordinates are the components of a supertwistor [5,6,7,8]. Surprisingly, twistor methods are not limited to massless particle mechanics, although a doubling of the twistor phase space is needed to allow for a non-zero mass [9]. One way to understand how it is that twistors can be relevant to massive particles is to consider a massive particle as a massless particle in a higher dimension. For example, by starting with the supertwistor form of the massless 6D superparticle action, a double-supertwistor form of the action for a particular 4D massive superparticle is found upon imposing appropriate momentum constraints [10]. A review of this idea, with extensions and other applications of it, can be found in [11]. There is no analogous way to obtain a double-supertwistor formulation of the massive 6D superparticle. Although the standard massive 6D superparticle action can be found by imposing momentum constraints on the massless 10D superparticle, there is no adequate supertwistor formulation of the latter that could be used to find the supertwistor formulation of the former; see e.g. [12,13] for a discussion of the difficulties. Nevertheless, a direct construction of a double-supertwistor formulation of the massive 6D superparticle is possible, as we show here. This construction could provide further insight into the massless 10D case, which is of relevance to superstring theory [14]. Apart from this possible link to superstrings, one may ask what advantages twistors have when there is no conformal invariance to be made manifest. One answer to this question emerged from the results of [11] for the simplest N = 1 massive 4D superparticle, which has a second "hidden" supersymmetry implying an equivalence to the N = 2 massive "BPS superparticle" (which is directly related to the massless 6D superparticle) [15]. This equivalence becomes manifest when the twistor formulations of the two actions are compared: they are identical! It was further shown in [15] that this equivalence is a general feature of massive superparticle actions (in a Minkowski vacuum background) in any spacetime dimension, because non-BPS massive superparticle actions are just versions of a BPS massive superparticle action for which the latter's fermionic gauge invariance ("kappa-symmetry") has been (partially or fully) gauge-fixed. The gauge fixing preserves manifest Lorentz invariance but obscures some of the supersymmetries. 
For example, one can gauge-fix the 6D massive BPS superparticle action with manifest (n, n) supersymmetry to arrive at a much simpler 6D massive superparticle action with no fermionic gauge invariance; this action still has (n, n) supersymmetry, of course, but only the (n, 0) supersymmetry is now manifest. This result greatly simplifies our present task because it allows us to focus without loss of generality on massive superparticle actions without fermionic gauge invariances. For example, the simplest such action for a superparticle of mass m is the phase-space action (1.1), in which Θ is a complex chiral anticommuting 6D spacetime spinor and e(t) is the Lagrange multiplier for the mass-shell constraint (we assume a Minkowski spacetime metric with "mostly plus" signature and coordinates {X^m; m = 0, 1, . . . , 5}). This action has manifest (1, 0) supersymmetry but also, for m ≠ 0, a "hidden" (0, 1) supersymmetry [15]. It is in canonical Hamiltonian form when m ≠ 0 because in this case it defines an invertible closed (orthosymplectic) two-form on the phase superspace with coordinates (X, P, Θ). As we shall see, the full (1, 1) supersymmetry of the action (1.1) becomes manifest in its supertwistor form. This involves a pair of 6D supertwistors, of the same chirality, on which there is a natural action of USp(4) ≅ Spin(5), which emerges as a gauge invariance of the supertwistor action, with corresponding "spin-shell" constraints. Coincidentally, Spin(5) is also (the double cover of) the 6D rotation group, which is Wigner's "little group" for massive particles in 6D. In reality, this is no coincidence, but it is not immediately obvious what the connection is between space rotations and the "internal" Spin(5) gauge group. This issue was addressed for the massive 4D superparticle in [11], but here we present a simpler resolution of it, in the 6D context, by consideration of the supersymmetric extension of the Pauli-Lubanski (PL) tensors. Pauli-Lubanski tensors are generalizations of the 4D Pauli-Lubanski "spin-vector"; they are translation-invariant tensors constructed from the Poincaré Noether charges {P, J}. In 6D the PL tensors comprise a 3-form and a vector, given in spinor form below. In the context of classical particle mechanics, the Poincaré Noether charges are tensors on phase space. When these charges are expressed in terms of the usual phase-space coordinates for a massive point particle, the PL tensors are identically zero. This is no longer true in the double-twistor formulation; instead, the PL tensors are zero as a consequence of the spin-shell constraints, so these constraints imply that the particle has zero spin. Here we show that an analogous result holds for the 6D massive superparticle if the PL tensors are replaced by what we shall refer to as the super-PL tensors. It turns out that all super-PL tensors are zero as a consequence of the superparticle spin-shell constraints, which implies that the quantum superparticle describes a massive supermultiplet of zero superspin. For n = 1 this is the 6D Proca supermultiplet of maximum spin 1, but to realize the full BPS-saturated (1, 1) supersymmetry it must be "centrally charged", which implies a doubling of the states. Throughout this paper, we make extensive use of the SU*(4) notation for 6D Minkowski spinors [16,17,18]. We begin with a brief review of this notation as it applies to the particle and superparticle in their standard phase-space formulations. 
Then we present the twistor formulation of the bosonic 6D particle, followed by a generalization to the 6D massive superparticle with manifest (n, 0) supersymmetry, confirming its BPS-saturated (n, n) supersymmetry. We conclude with a discussion of how the results obtained here fit into the general pattern of twistor formulations of particle mechanics models in D = 3, 4, 6 spacetime dimensions, and their relation to the division algebras R, C, H, and we comment on implications for the D = 10 case in relation to the octonions O. 6D preliminaries In SU*(4) notation, 6D vectors are anti-symmetric bi-spinors. In particular, the standard phase-space coordinates for a point particle are (X^{αβ}, P_{αβ}) (α, β = 1, 2, 3, 4), and the action for a particle of mass m is given by (2.1). As for all other Lorentz 6-vectors, we raise indices using the alternating invariant tensor of SU*(4); similarly, 6-vector indices may be lowered using the inverse alternating invariant tensor of SU*(4), defined such that antisymmetrization over the enclosed indices (indicated by brackets) is of "unit strength". We remark here, for future use, that if the spinor components of P are interpreted as entries of a matrix P, then its determinant is fixed by the Lorentz scalar P² (equation (2.4)). The canonical Poisson bracket relations follow from the action (2.1); the Poincaré Noether charges take a simple spinor form, and their non-zero Poisson brackets are given in (2.7). Pauli-Lubanski tensors As remarked in the introduction, there are two 6D analogs of the 4D Pauli-Lubanski spin vector. In SU*(4) spinor notation, the self-dual and anti-self-dual parts of the PL 3-form tensor, and the PL vector Ξ, take simple bi-spinor forms.‡ The Pauli-Lubanski tensors vanish identically when the Poincaré charges are expressed in terms of the phase-space variables (X, P), but not when they are expressed in terms of the twistor phase-space variables to be introduced later. The PL tensors themselves satisfy the identities (2.11), and there is a spinor relation, (2.12), expressing the fact that Ξ is a contraction of J with Σ. The main reason for the importance of PL tensors, for massive particles, is that the scalars constructed from them are proportional to Casimirs of the Poincaré group. In 6D there are two such scalars (2.13).§ [‡ To verify translation invariance of the PL tensors (i.e. that they have zero Poisson bracket with P), one needs certain spinor identities; this corrects the expression given in [11]. § This count excludes the Casimir P², which is not constructed from a PL tensor.] Massive superparticle The minimal 6D spinor is a complex 4 of SU*(4), which can be traded for a (2, 4) of SU(2) × SU*(4) subject to a "symplectic reality condition". More generally, a set of n such spinors of the same chirality naturally transforms as the (2n, 4) of USp(2n) × SU*(4), again subject to a "symplectic reality condition" (see e.g. [16]). The n minimal anticommuting spinors needed for a 6D superparticle with (n, 0) supersymmetry thus combine to form a single spinor Θ^α_i (i = 1, . . . , 2n), which has 4n independent complex components. Using this notation, one can write down the action for the massive 6D superparticle with manifest (n, 0) supersymmetry, in which Ω_ij is the 2nd-order antisymmetric invariant tensor of USp(2n), with inverse Ω^ij. The orthosymplectic phase-space 2-form defined by this action is invertible provided that the mass m is non-zero, and its inverse gives us the canonical Poisson bracket relations. 
In particular, one finds this way the Poisson brackets (2.16), in which the mass-shell condition has been used to simplify the right-hand sides (so one should first replace m² by −P² before attempting to verify Jacobi identities). The Lorentz Noether charge is now given by (2.17), and the (n, 0) supersymmetry charges by (2.18). As reviewed in the introduction, the massive 6D superparticle with manifest (n, 0) supersymmetry actually has (n, n) supersymmetry. The (0, n) non-manifest supersymmetry Noether charges are given by (2.19). Using (2.16), one finds the brackets of these charges. We shall not need to know {X, X}_PB, which is also non-zero, implying a non-commutative Minkowski spacetime in the quantum theory. One also finds, as expected, that the supercharges generate the anticipated supersymmetry algebra. Super-Pauli-Lubanski tensors We are now in a position to find supersymmetric analogs of the Pauli-Lubanski tensors, but we postpone discussion of this issue for Ξ because it is more simply addressed in the supertwistor formulation that we shall be developing later. Written as bi-spinors, the supersymmetric versions of the PL tensors (2.8) are given in (2.22). One may verify that these bi-spinors have zero Poisson bracket with all supersymmetry charges provided that one makes use of the mass-shell constraint and the relation (2.23), which is valid for the superparticle as a consequence of the expressions (2.18) and (2.19) for the supercharges in terms of the phase-superspace coordinates. A clarification is in order here. The existence of the "hidden" (0, n) supersymmetry charges is a special feature of the superparticle model under study. Should it not be possible to define super-PL tensors for (n, 0) supersymmetry that involve only the (n, 0) supercharges? The answer is a qualified yes. If our interest is in the quadratic Casimir of the (n, 0) supersymmetry algebra that generalizes the usual Σ² invariant of the Poincaré algebra (for example), then we may proceed by defining a new traceless bi-spinor Υ. This is equivalent to a 2nd-rank antisymmetric tensor, or 2-form, and hence also to a 4-form (the relevance of this observation will be apparent shortly). It has zero Poisson bracket with the Q supercharges, so its norm Υ² ≡ Υ^α_β Υ^β_α is a super-Poincaré invariant. This constructs a Casimir from Σ^(+) and P, valid for the (n, 0) supersymmetry algebra; this is possible because in 6D we can decompose Σ into its self-dual and anti-self-dual parts. In other spacetime dimensions there is no analog of the strictly (n, 0) super-extension of Σ that commutes with the (n, 0) supercharges (and the same is true for Ξ even in 6D). The standard resolution of this problem, for the (D − 3)-form Σ, relies on the fact that there is a super-invariant extension of the (D − 2)-form found by taking the exterior product of Σ with P; see e.g. [19] for the 4D case, which was generalized to arbitrary spacetime dimension in [20]. In 6D this 4-form is precisely our Υ. We have still to address the issue of the relation between Υ² and Σ². Recall that Σ² is a contraction of Σ^(+) with Σ^(−), but the definition of the latter in (2.22) involves the hidden (0, n) supersymmetries; moreover, one needs the superparticle mass-shell condition and the relation (2.23) between the (n, 0) and (0, n) supercharges to show that Σ^(−) has zero Poisson bracket with the (n, 0) supercharges. This makes it appear that Σ² is defined only for the superparticle. However, if we use the relation (2.23) to rewrite Σ^(−) in terms of the (n, 0) supercharges, then we find that the first of the identities of (2.11) remains valid for the super-PL tensors as we have defined them. 
A corollary is that the scalar Σ², constructed as a Casimir for the (n, n) supersymmetry algebra of the superparticle, is valid in full generality when considered as a Casimir for massive representations (P² = −m² ≠ 0) of just the (n, 0) supersymmetry algebra. Twistor formulation of massive 6D particle We can solve the mass-shell constraint P² + m² = 0 of the action (2.1) by first setting P in the bi-spinor form (3.1), where U is a USp(4) 4-plet (I = 1, 2, 3, 4) of SU*(4) spinors, and Ω is here the standard invertible antisymmetric invariant tensor of USp(4). Then, viewing U as the 4 × 4 matrix with entries U^I_α, we impose the constraint 0 = ϕ := det U + m². (3.2) To verify that this solves the mass-shell constraint one needs an identity involving the USp(4)-invariant alternating tensor ǫ_IJKL. A corollary of (3.1), using (2.4), is that P² = ± det U. Choosing the upper sign for compatibility with (3.2), we see that the constraint ϕ = 0 is just the original mass-shell constraint in spinor form! Notice that the solution (3.1) of the original mass-shell constraint is invariant under local USp(4) transformations, so we can anticipate that new constraints associated to a new USp(4) ≅ Spin(5) gauge invariance will emerge. Substitution for P into the action gives its form in terms of the new spinor variables. Let us define the conjugate spinor variables W^α_I, where Ω_IJ is defined (as for Ω_ij) to be the inverse of Ω^IJ. In general, we use Ω_IJ (Ω^IJ) to lower (raise) USp(4) indices according to a fixed convention (for an arbitrary USp(4) 4-plet Z). Given the definition of W^α_I, we have Λ_IJ ≡ 0 identically, so this must be imposed as a set of constraints when W^α_I is considered as a set of independent variables. These are the "spin-shell" constraints; this terminology will be justified shortly. Imposing these constraints with Lagrange multipliers s^IJ, and the new mass-shell constraint with a Lagrange multiplier ρ, we arrive at the twistor form of the action for a massive 6D particle, given in (3.10). The constraint functions Λ_IJ generate the expected local USp(4) gauge transformations, via the canonical Poisson bracket relations (3.11). Since det U is manifestly USp(4) gauge invariant, the additional constraint function has zero Poisson bracket with Λ_IJ, and hence all constraints are first class. As a consistency check, let us verify that the physical phase-space dimension is unchanged by the process that converts the standard massive particle action into the new twistor action. We started with a phase space of dimension 2 × 6 = 12 subject to a single first-class constraint, implying a physical phase space of dimension 12 − 2 = 10. We now have a phase space of (real) dimension 2 × (4 × 4) = 32 subject to 10 + 1 = 11 first-class constraints, implying a physical phase-space dimension of 32 − 22 = 10. Gauge invariances The constraint functions Λ_IJ generate the Spin(5) gauge transformations of the canonical variables. This is an invariance of the action provided that we assign a compensating gauge transformation to the Lagrange multiplier s^IJ. This Spin(5) gauge invariance is expected because it was introduced when we solved the mass-shell constraint P² + m² = 0, but what is the significance of the additional gauge invariance associated to the constraint ϕ = 0? 
To answer this question, we begin by observing that the additional non-zero gauge transformations, with infinitesimal parameter λ(t), involve a new opposite-chirality commuting spinor variable. This new variable is essentially the inverse of U on the surface ϕ = 0. A useful identity then allows us to express P on the ϕ = 0 surface in terms of it. Next, we observe that we may add to any gauge transformation a "trivial" gauge transformation with parameter ξ(t). This is manifestly a gauge invariance, but a "trivial" one because the transformations are zero on solutions of the equations of motion. Now consider the linear combination δ′_ξ of the two. One finds that the δ′_ξ transformations of the canonical variables are those due to a reparametrization of the worldline time. We conclude that the additional constraint is associated with the time reparametrization invariance of the action, as expected from its equivalence to the original mass-shell constraint. Poincaré invariance In the new spinor variables, the Poincaré Noether charges take a simple bi-spinor form. Using these expressions in (2.22), together with the constraint det U = −m², we find that the PL 3-form is proportional to the spin-shell constraint functions (3.24); from (2.12) it then follows that the same is true of the PL vector (3.25). Notice also that the scalar Σ² is proportional to the square of the spin-shell constraint functions: the left-hand side is a Poincaré Casimir, while the right-hand side is proportional to the quadratic Casimir of the spin-shell group. This generalizes to 6D the observation for the massive 4D particle in [11], but the connection between the spin-shell constraints and the particle's spin is already evident from the expressions (3.24) and (3.25), because they show that all PL tensors are zero on spin-shell, and this tells us that the particle has zero spin. Supertwistors and the massive 6D superparticle We now turn to the massive superparticle with action (1.1), which has manifest (n, 0) supersymmetry, and we solve the mass-shell constraint as in (3.1). As before, this leads to the new mass-shell constraint 0 = det U + m² ≡ ϕ. Substitution for P proceeds as before. The definition of W again leads to an identity that, in order to promote W to an independent variable, must be imposed as a constraint, so we arrive at the action (4.4) in the new variables, with spin-shell constraint functions (4.3). The phase-space variables are the components of a pair of 6D supertwistors (I = 1, 2, 3, 4 rather than I = 1, 2), but the 6D superconformal invariance is broken by the ϕ = 0 constraint. The new superparticle action (4.4) is manifestly Lorentz invariant, with corresponding Noether charges. There is no fermion bilinear term, as could have been anticipated from the fact that the anticommuting variables μ^i_I are now Lorentz scalars. The action is also invariant under all (n, n) supersymmetries, with corresponding Noether charges. This may be verified using the Poisson bracket relation (3.11) and the new (symmetric) Poisson bracket relations of the anticommuting variables. In particular, the spin-shell constraints are (n, n) supersymmetric because they have zero Poisson bracket with the supercharges. Using the supertwistor expressions for the super-Poincaré charges in the expressions (2.22) for the super-Pauli-Lubanski 3-form Σ, we find a result that is formally identical to the one for the bosonic particle; the only difference is that the spin-shell constraint functions, given by (4.3), now include terms bilinear in the anticommuting variables μ^i_I. This result should not be a surprise, because the spinor variables U are inert under supersymmetry and, as we have just seen, the superparticle extension of the spin-shell constraint functions is supersymmetric. It is now obvious how to find the supersymmetric extension of the Pauli-Lubanski vector Ξ of (2.9). 
We just return to the twistor expression (3.25) and re-interpret Λ_IJ as the superparticle spin-shell constraint functions; this gives us the super-PL vector. Quantum theory If we define a massive particle of zero superspin to be one for which all super-PL tensors are zero, then the spin-shell constraints of the massive superparticle tell us that it has zero superspin. The canonical anticommutation relations of the 8n fermionic phase-space variables of the action (4.4) imply a supermultiplet with 2^{4n} independent polarization states. For n = 1 this gives us a massive supermultiplet with 16 components, and zero superspin tells us that this must be the 6D Proca multiplet, for which the bosonic content is one massive vector and three scalar fields. This is a massive supermultiplet of (1, 0) 6D supersymmetry. If we declare the particles of this supermultiplet to carry a central charge, which can be done by allowing the superparticle wavefunction to be complex, then it is also a supermultiplet of (1, 1) 6D supersymmetry, with a central charge saturating the BPS unitarity bound implied by supersymmetry. In other words, we have the choice of quantizing preserving only the manifest (1, 0) 6D supersymmetry, in which case we can impose a reality condition on the superparticle wavefunction, so as to get the Proca supermultiplet, or we can insist on preserving the full (1, 1) 6D supersymmetry, in which case we get a pair of Proca supermultiplets with equal and opposite central charges. The latter supermultiplet is exactly what one gets by keeping a single massive level of the Kaluza-Klein tower resulting from toroidal compactification to 6D of the 10D Maxwell supermultiplet. Discussion In the twistor formulation of particle mechanics, in D spacetime dimensions, the usual mass-shell constraint is solved by expressing the D-momentum as a bi-spinor. The spinor variable introduced by this solution is then viewed as a new phase-space coordinate, and its canonical conjugate is another spinor. Taken together, these canonically conjugate spinors constitute a twistor, a spinor of the conformal group. However, for this construction to work, it must be that the physical phase space has the same dimension as it did originally, and this is a significant constraint. For D = 3, 4, 6 we have D = 2 + K, where K is the dimension (over R) of K = R, C, H (the reals, complex numbers and quaternions), and a minimal spinor is a doublet of Sl(2; K); in addition, a set of N such spinors is an N-plet of the internal symmetry group U(N; K) [16]. Since a twistor comprises a pair of spinors, each of which has 2N K-valued components, the total dimension over R of the vector space spanned by N twistors is 4NK. However, since 2 dim U(N; K) = N(N + 1)K − 2N (5.1), the combined effect of the U(N; K) spin-shell constraints and the associated U(N; K) gauge invariance is to reduce the phase space to one with dimension 2N − N(N − 3)K. On the other hand, the physical phase-space dimension is 2(D − 1) = 2(1 + K). This means that 2(N − 1) = (N − 1)(N − 2)K, assuming the absence of any constraints other than the spin-shell constraints; allowing for the possibility of additional constraints, we thus arrive at the inequality (5.2). For the twistor form of the massless point particle in dimensions D = 3, 4, 6 we need N = 1, in which case the above inequality is saturated. 
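Before turning to the massive case, the counting just described can be assembled into one chain of formulas. The explicit form of the inequality, tagged (5.2) below, is inferred from the surrounding discussion (it is saturated for N = 1 and has left-hand side 2 for N = 2), so this reconstruction should be read as a sketch consistent with the text rather than a verbatim quotation.

```latex
% N twistors over K (dim_R K = K); spin-shell group U(N;K), with
% 2 dim U(N;K) = N(N+1)K - 2N as in (5.1).
\[
4NK - 2\,\dim U(N;\mathbb{K})
   = 4NK - \bigl[\,N(N+1)K - 2N\,\bigr]
   = 2N - N(N-3)K .
\]
% Requiring at least the physical phase-space dimension 2(D-1) = 2(1+K),
% with equality when the spin-shell constraints are the only constraints:
\[
2N - N(N-3)K \;\ge\; 2(1+K)
\quad\Longleftrightarrow\quad
2(N-1) \;\ge\; (N-1)(N-2)K . \tag{5.2}
\]
% N = 1: both sides vanish (massless case; the inequality is saturated).
% N = 2: LHS = 2, RHS = 0, leaving room for one extra first-class constraint.
```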
The massive particle requires both N > 1 and at least one additional constraint (in order to solve the mass-shell condition), and this is compatible with the above inequality only for N = 2, in which case (5.2) is satisfied with its left-hand side equal to 2. This allows either two additional second-class constraints or one additional first-class constraint but, as we explain below, the twistor form of the massive particle must have one additional first-class constraint. These conditions are indeed realized by the double-twistor formulation of the massive particle, as we have shown here for D = 6. Our result thus complements and completes earlier work on twistor constructions of this general type. One may ask why there is an additional constraint for the massive particle. Actually, one should expect an additional constraint because of the worldline time reparametrization invariance of the action, so what has to be explained is why no such additional constraint is needed for the massless particle. The answer is that in the massless case, but not in the massive case, one can combine a time reparametrization with a spin-shell gauge transformation to arrive at a "trivial" gauge transformation: one for which the transformations are all zero for solutions of the equations of motion. As such gauge transformations have no physical effect, time-reparametrization invariance is not independent of the spin-shell gauge invariance for a massless particle. For a massive particle the equations of motion differ, such that the spin-shell constraint functions no longer suffice to generate all non-trivial gauge transformations, so an additional constraint associated to time reparametrization invariance is required. Another way to see how the possibilities for a twistor formulation of particle mechanics are limited is to notice that there must be a coincidence (or near coincidence) between the spin-shell group U(N; K) and Wigner's "little group" (the subgroup of the Poincaré group relevant to the classification of elementary particles), with N = 1 applying to massless particles and N = 2 to massive particles. The reason is that the Pauli-Lubanski spin tensors, which are identically zero when expressed in terms of the usual phase-space variables of a spinless particle, are zero when expressed in twistor variables only as a consequence of the spin-shell constraints. Consequently, the little-group generators become identified with the spin-shell group generators in a standard Lorentz frame. The massive 4D particle is a mild exception to this rule because the spin-shell group is U(2) but the rotation group is SU(2) (a "near coincidence"); however, the U(1) factor drops out of the Pauli-Lubanski vector, which becomes identified with the generators of space rotations. For the massive 6D particle considered here, the spin-shell group is USp(4) ≅ Spin(5), which has the same Lie algebra as the rotation group, and the Pauli-Lubanski 3-form is equivalent in a standard Lorentz frame to the adjoint 10 of the Spin(5) algebra, spanned by the spin-shell constraint functions. In addition to finding the twistor formulation of the massive 6D particle, we have extended the construction to a supertwistor formulation of the massive superparticle. 
A nice feature of this construction is that it makes manifest the full supersymmetry invariance, which is always that of a BPS superparticle with (n, n) supersymmetry for some n. Exactly the same action would result from a supertwistor reformulation of the "kappa-symmetric" BPS superparticle action, for which the (n, n) supersymmetry is manifest from the start. This follows from the general arguments of [15], summarised in the introduction, but it was also verified explicitly for D = 4 in [11]. Implicit in our results is a supertwistor formulation of the massless 6D superparticle with (n, n) supersymmetry, obtained by setting m = 0. Notice that this massless superparticle action cannot be equivalent to the standard massless superparticle action with manifest (n, 0) supersymmetry because (in contrast to the massive case) the latter does not have a hidden (0, n) supersymmetry. Also, there is no previously known supertwistor formulation of the massless (n, n)-supersymmetric superparticle (only the (n, 0) cases are known). We suspect that our indirect solution to this problem is not the most economical one, but we have not investigated this. The spin content of any relativistic particle mechanics model is determined by the Pauli-Lubanski (PL) tensors (which are functions on phase space in the context of classical particle mechanics). All PL tensors are zero for a massive particle of zero spin; for the twistor form of the particle's action this is true as a consequence of the spin-shell constraints (hence the terminology). We have established a similar result here for the supertwistor form of the massive 6D superparticle: all super-PL tensors are zero as a consequence of the spin-shell constraints. In the quantum theory this implies that the superparticle describes a 6D supermultiplet of zero superspin. In the simplest (n = 1) case this is the 6D Proca supermultiplet for a massive vector field, three scalar fields and their spin-1/2 superpartners, which must be centrally charged if we insist on quantizing preserving the full (1, 1) supersymmetry. Our construction of the super-PL tensors differs from the standard one. In fact, this terminology is not used in the standard construction of super-Poincaré Casimirs, for good reason. For example, for D = 4 there is no N = 1 supersymmetric extension of the usual Pauli-Lubanski spin-vector that commutes with the supersymmetry generator. The problem is milder in 6D because of special features of this dimension (one can use only the self-dual part of Σ), but it is still true that not all 6D PL tensors have a strictly (n, 0) extension that commutes with the (n, 0) supercharges. In 4D this problem is solved by the existence of a supersymmetric extension of the antisymmetric tensor constructed by taking the exterior product of the momentum generator with the PL spin-vector. The same construction can be used in 6D, and higher dimensions, but the method has not yet been developed so that it applies to all super-Poincaré Casimirs. Our superparticle approach provides an alternative route to the construction of super-Poincaré Casimirs: by taking account of the "hidden" supersymmetries of the superparticle model [15], we find a super-PL tensor invariant under all supersymmetries. We have shown for the simplest case how the scalars constructed from these super-PL tensors become model-independent Casimirs for the manifest supersymmetry algebra. 
We suspect that this idea could lead to a simple general construction of all super-Poincaré Casimirs, but we leave this to the future. R, C, H, O The original suggestion of a close relationship between (Minkowski space) supersymmetry in spacetime dimensions D = 3, 4, 6, 10 and the division algebras was based partly on the coincidence that the double cover of the Lorentz group in these dimensions is Sl(2; K) for K = R, C, H, O [16], as confirmed by Sudbery for K = O by a suitable definition of Sl(2; O) [21]. The results reported here provide further evidence of this relationship for the D = 6 case, as would be manifest if we had used 2-component quaternionic spinors instead of 4-component complex spinors; indeed, a quaternionic formulation of the massless D = 6 superparticle (although not its supertwistor version) was worked out in [22]. The work reported here is potentially of relevance to the massless D = 10 superparticle because the massive 6D superparticle can be viewed as a massless 10D particle in a spacetime that is a product of 6D Minkowski space with a 4-torus, with a fixed non-zero 4-momentum on the 4-torus. This is easily seen from the usual phase-space formulation of the massive 6D superparticle, but it is not at all obvious from its supertwistor phase-space formulation. If this 10D origin could be understood in 6D twistor terms, it could provide a clue to some novel reformulation of the 10D massless superparticle. We should point out that there is already an octonionic formulation of the massless 10D superparticle [23,24], and a twistor version of it was proposed in [25]. Another D = 10 result involving both the octonions and twistors was presented in [26]: the super-Maxwell field equations for D = 3, 4, 6, 10 can be solved (by a twistor transform) in terms of a K-valued worldline superfield satisfying a "K-chiral" constraint. Another obvious question is whether the results reported here for the massive 6D superparticle could be generalised to 10D, i.e. to the massive D = 10 superparticle with (1, 0) supersymmetry. This would be of great interest because the action actually has (1, 1) 10D supersymmetry and is just a gauge-fixed version of the D0-brane action of IIA superstring theory [15]. However, we have nothing definite to say about this case, and so leave it to future investigations.
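The division-algebra bookkeeping used throughout this discussion can be collected in one table; it restates only correspondences already quoted in the text (D = 2 + dim K and the Lorentz double cover Sl(2; K)).

```latex
\[
\begin{array}{l|cccc}
\mathbb{K} & \mathbb{R} & \mathbb{C} & \mathbb{H} & \mathbb{O} \\
\hline
\dim_{\mathbb{R}} \mathbb{K} & 1 & 2 & 4 & 8 \\
D = 2 + \dim_{\mathbb{R}} \mathbb{K} & 3 & 4 & 6 & 10 \\
\text{Lorentz double cover} & Sl(2;\mathbb{R}) & Sl(2;\mathbb{C}) & Sl(2;\mathbb{H}) & Sl(2;\mathbb{O})
\end{array}
\]
```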
7,695.8
2015-07-18T00:00:00.000
[ "Physics" ]
COMPARATIVE MULTICENTER STUDY OF TREATMENT OF MULTI-FRAGMENTED TIBIAL DIAPHYSEAL FRACTURES WITH NONREAMED INTERLOCKING NAILS AND WITH BRIDGING PLATES OBJECTIVE: A prospective, randomized study to compare patients with closed, multi-fragmented tibial diaphyseal fractures treated using one of two fixation methods undertaken during minimally invasive surgery: nonreamed interlocking intramedullary nails or bridging plates. MATERIALS AND METHODS: Forty-five patients were studied; 22 patients were treated with bridging plates, 23 with interlocking nails without reaming. All fractures were Type B and C (according to the AO classification). RESULTS: Clinical and radiographic healing occurred in all cases. No cases of infection occurred. The healing time for patients who received nails was longer (4.32 weeks on average) than the healing time for those who received plates (P = 0.026). No significant differences were observed between the two methods regarding ankle mobility for patients in the two groups. CONCLUSIONS: The healing time was shorter with the bridging plate technique, although no significant functional differences were found. INTRODUCTION Fractures of the tibia are among the most common long-bone fractures in adults. A review of the literature did not provide papers that were adequate to reach a decision about the best management method. 1 Usually, low-energy fractures are not treated surgically, but rather by closed reduction and plaster immobilization, while for fractures secondary to high-energy trauma there is a trend towards the surgical approach. With the purpose of comparing surgical methods of treatment, a prospective, randomized, multicenter study on multi-fragmented tibial diaphyseal fractures was conducted in 2 institutions, namely the Departments of Orthopedics of the University of São Paulo and of the Federal University of São Paulo. MATERIALS AND METHODS The inclusion criteria for this study were patients with closed, multi-fragmented tibial diaphyseal fractures. Borderline cases of very proximal or very distal fractures were excluded using the square rule. 2 A protocol was developed that included recording patients' general data plus the cause and location of the fracture, as well as its association with fibular fracture, its classification, the time from accident to surgery, and the follow-up period (Tables 1, 2, and 3). The surgical treatment procedures employed minimally invasive techniques with nonreamed interlocking intramedullary nails or bridging plates. Patients were randomly allocated at the surgical center. Forty-five patients were studied, with 22 patients receiving bridging plates and 23 receiving interlocking nails, and with the location of treatment as follows: 26 patients (13 plates, 13 nails) at the Federal University of São Paulo, and 19 patients (9 plates, 10 nails) at the Institute of Orthopedics and Traumatology, Faculty of Medicine, University of Sao Paulo, SP (Brazil). The study period was from January 2002 to June 2003. The follow-up period varied between 6 months and 1 year. This protocol was approved by the Ethics Committees of the participating institutions, and signed informed consent was obtained from each included patient. The surgical technique consisted of indirect reduction for both forms of osteosynthesis, without violation of the fracture focus. The interlocking intramedullary nails used were AO universal steel nails, introduced with no reaming of the medullary canal. We used a narrow large-fragment plate (4.5 mm AO). 
The average values for age, time from accident to surgery, and follow-up period were, respectively, 34 years, 23 days, and 51 weeks for the group treated with intramedullary nails, and 34 years, 34 days, and 32 weeks for the group treated with bridging plates. We used Student's t test for independent samples 3 to compare the numerical measurements of patients under these baseline conditions. In the group treated with intramedullary nails, 4 patients were women and 19 were men, while in the group treated with a bridging plate, 3 were women and 19 were men. Eleven individuals were smokers and 12 were nonsmokers in the group treated with nails, while 13 were smokers and 7 were nonsmokers in the group treated with plates. The tibial fracture was accompanied by a fibular fracture in 18 patients of the group treated with nails and in 10 patients of the group treated with plates; 5 and 12 patients, respectively, presented with intact fibulae. We used the chi-square test or Fisher's exact test (as required for each case 2 ) to compare groups according to categorical variables. RESULTS The results obtained regarding healing time, as well as complications such as angulation, shortening, infection, healing delay, pseudoarthrosis, and ankle mobility, are presented in Tables 4 and 5 for patients who received bridging plates and interlocking intramedullary nails, respectively. The results of the statistical analysis of cases are shown in Table 6. The groups were homogeneous concerning age, and on average they had the same time from accident to surgery, while patients who received nails had significantly longer follow-up periods. The groups were not different regarding sex or smoking. More fractures with an associated fibular fracture were found among the individuals who received nails as compared with those who received plates. The healing time for patients who received nails was significantly longer, by an average of 4.30 weeks, than the healing time for those who received bridging plates (P = 0.019, Student's t test for independent samples) (Table 7). There were no significant differences when comparing the two groups for the following parameters: infection, healing delay (P = 0.109, Fisher's test), pseudoarthrosis, angulation > 10 degrees, shortening > 1 cm, and ankle mobility (P = 0.243, Fisher's test). DISCUSSION This study was designed to compare the efficacy of treatment of closed, multi-fragmented tibial diaphyseal fractures with nonreamed interlocking intramedullary nails and with bridging plates. In both types of osteosynthesis, the aim was to apply the principle of fixation with relative stability. Under this principle, the deformation at the fracture is better tolerated and implant loading is lower. In both cases, healing is favored by leaving the fracture focus undisturbed, with the formation of a secondary bone callus expected. According to the AO classification, multi-fragmented tibial diaphyseal fractures are designated 42 (4 for tibia; 2 for diaphysis) and subdivided into types B and C; Type B fractures present contact between the proximal and distal fragments after reduction, while Type C fractures are more fragmented and do not show this contact. For multi-fragmented fractures, surgical treatment using the open reduction technique may compromise the blood supply and lead to a healing disorder. Wide surgical exposure is required to achieve anatomical reduction and fixation with the absolute stability principle; however, in multi-fragmented fractures, this strategy can lead to a severe disturbance of the blood supply. 
2,4 For this reason, open reduction surgery is reserved only for single-line (simple) diaphyseal fractures, where direct or primary healing is expected. It should be emphasized that the deformation at the fracture is less well tolerated in such cases, and small technical inaccuracies greatly increase the loading of the implant and can lead to nonhealing and, therefore, to a defective osteosynthesis. [7][8][9][10] This was the main argument and also the main difficulty in this study, since most staff at the various orthopedics services who were contacted were not willing to participate in this study, alleging that intramedullary nailing was the standard treatment for these fractures. Thus, only 3 institutions agreed to participate in this study, although one of them did not present enough data for inclusion in the protocol. Bridging plates are used more often in diaphyseal fractures that compromise the proximal and distal ends of the tibia [11][12][13][14][15]; however, in this study this aspect was not considered, since patients were randomly allocated. The placement of the plate on the anterior-medial tibial face is technically easier and leads to less compromise of its vascularization. 16 The statistical analysis showed that both groups (nails and plates) were homogeneous concerning age, time from accident to surgery, sex, and smoking; the only difference was the increased incidence of associated fibular fractures in the group treated with nails. We compared similar groups and observed that the clinical and radiological parameters analyzed, such as articular function, deformities, infection, and pseudoarthrosis, were similar in both groups (Tables 4 and 5). The healing time was the only significant difference found. On average, bone healing in patients receiving bridging plates occurred 4 weeks earlier compared with patients who received nonreamed interlocking intramedullary nails (Table 7). We can conclude that the healing times were significantly shorter in patients undergoing surgery with the bridging plate technique, and that the functional results were not different between the two groups. Table 1 - Distribution of patients per group and cause of fracture. Table 2 - Distribution of patients per group according to the AO classification. Table 3 - Distribution of patients by group and fracture site. Table 4 - Individual results for patients undergoing surgical treatment with bridging plates: ordinal number, healing time, infection, healing delay, pseudoarthrosis, angulation, shortening, and ankle mobility. Table 5 - Individual results for patients undergoing surgical treatment with intramedullary nails: ordinal number, healing time, infection, healing delay, pseudoarthrosis, angulation, shortening, and ankle mobility. Table 7 - Descriptive measurements of patients' healing time per group.
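The group comparisons described in Materials and Methods (Student's t test for healing times, chi-square or Fisher's exact test for categorical variables) can be sketched as follows. The healing-time arrays are hypothetical placeholders (individual patient values were not reproduced here), while the 2 × 2 fibular-fracture table uses the counts reported above (18/5 for nails, 10/12 for plates).

```python
# Sketch of the reported statistical comparisons using SciPy.
import numpy as np
from scipy import stats

# Hypothetical healing times in weeks (placeholders, not the study data).
healing_plates = np.array([16, 18, 17, 20, 19, 18, 17, 21])
healing_nails = np.array([21, 23, 20, 24, 22, 25, 21, 23])

# Student's t test for independent samples, as used in the paper.
t_stat, p_value = stats.ttest_ind(healing_nails, healing_plates)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")

# Fisher's exact test on associated fibular fracture vs intact fibula,
# using the counts reported in the results (nails 18/5, plates 10/12).
table = [[18, 5], [10, 12]]
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact test: odds ratio = {odds_ratio:.2f}, P = {p_fisher:.4f}")
```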
2,023
2006-08-01T00:00:00.000
[ "Engineering", "Medicine" ]
PhaVIP: Phage VIrion Protein classification based on chaos game representation and Vision Transformer Abstract Motivation As viruses that mainly infect bacteria, phages are key players across a wide range of ecosystems. Analyzing phage proteins is indispensable for understanding phages' functions and roles in microbiomes. High-throughput sequencing enables us to obtain phages from different microbiomes at low cost. However, compared to the fast accumulation of newly identified phages, phage protein classification remains difficult. In particular, a fundamental need is to annotate virion proteins, the structural proteins such as major tail, baseplate, etc. Although there are experimental methods for virion protein identification, they are too expensive or time-consuming, leaving a large number of proteins unclassified. Thus, there is a great demand to develop a computational method for fast and accurate phage virion protein (PVP) classification. Results In this work, we adapted the state-of-the-art image classification model, Vision Transformer, to conduct virion protein classification. By encoding protein sequences into unique images using chaos game representation, we can leverage Vision Transformer to learn both local and global features from sequence "images". Our method, PhaVIP, has two main functions: classifying PVP and non-PVP sequences and annotating the types of PVP, such as capsid and tail. We tested PhaVIP on several datasets with increasing difficulty and benchmarked it against alternative tools. The experimental results show that PhaVIP has superior performance. After validating the performance of PhaVIP, we investigated two applications that can use the output of PhaVIP: phage taxonomy classification and phage host prediction. The results showed the benefit of using classified proteins over all proteins. Availability and implementation The web server of PhaVIP is available via: https://phage.ee.cityu.edu.hk/phavip. The source code of PhaVIP is available via: https://github.com/KennthShang/PhaVIP. Introduction Bacteriophages, or phages for short, are viruses that can infect bacteria. They are the most widely distributed and abundant biological entities in the biosphere [1], with an estimated population of more than 10^31 particles [2]. Phages play an important role in modulating microbial system dynamics by lysing bacteria and mediating the horizontal transfer of genetic material [3]. In addition, accumulating studies show that phages have an important impact on multiple applications, such as the food industry [4], disease diagnostics [5], engineering bacterial genomes [6], and phage therapy [7]. 
A fundamental step to promote phages' applications in these fields is phage genome annotation. Phage proteins are highly diverse, and their current annotations are far from complete. For example, only 33% of proteins in the RefSeq phage protein database have annotations. The annotated phage proteins can be roughly divided into two groups: virion and non-virion proteins. Phage virion proteins (PVPs) are phage structural proteins that make up phage outer protein shells [8]. They are regarded as one major line of evidence in phage taxonomy classification by the International Committee on Taxonomy of Viruses (ICTV). During infection, PVPs bind to the host's receptors, aiding the insertion of the phage's genetic material into the host cell. Identifying PVPs is a fundamental step toward understanding their biological properties and mechanisms of host cell binding. Due to their ubiquity and functional importance, PVPs have been leveraged in multiple downstream applications. For example, PVPs can be used as marker genes in phage host prediction [9] and prophage identification within bacterial genomes [10]. Although PVPs have become commonly used features in several phage analysis tasks, accumulating research on non-PVPs shows that we may underestimate their importance. For example, non-PVPs usually play key roles in phages' lifecycles, including replication and packaging. Among non-PVPs, "integrase" and "excisionase" are two widely accepted marker genes for classifying the lifestyle of phages. Based on these marker genes, several phage lifestyle prediction methods have been developed [11,12]. In addition, some non-PVPs are important for the binding of phage tail fibers to host receptor proteins. For example, the endoglycosidase of Salmonella virus P22 hydrolyzes lipopolysaccharide and destroys the O-specific chain during phage attachment [13]. Moreover, understanding non-PVPs can help in utilizing phages for engineering bacterial genomes [6], regulating gene expression, and introducing novel functions to change cell physiology [14,15]. Because PVPs and non-PVPs have different functions, distinguishing them can extend our knowledge of phage properties and functions. Although there are experimental methods for PVP annotation, such as protein arrays and mass spectrometry, they are usually time-consuming, labor-intensive, and costly. Thus, they cannot keep pace with the rate at which new phages are identified by high-throughput sequencing. For example, as reported in [16], only 11% of proteins could be annotated using the mass spectrometry method. Thus, computational PVP classification remains the major choice for handling large-scale input data. The main challenge for computational PVP classification is the high diversity of proteins in phages. For example, most structural proteins encoded by tailed phages, except for portal proteins, cannot be identified through pairwise sequence alignments. According to the latest RefSeq database, downloaded before Dec. 2022, 66% of proteins are marked as "hypothetical protein", meaning that these proteins cannot be aligned to annotated proteins. Thus, fast and accurate computational methods to predict and classify diverged PVPs are urgently needed. 
Table 1 (excerpt): deep learning-based PVP classification tools with their feature encodings and classifiers.

Tool | Feature encoding | Model
DeePVP [28] | One-hot | CNN
VirionFinder [29] | One-hot and physicochemical properties | CNN
PhANNs [30] | k-mer frequency | ANN
iVIREONS [31] | Single amino acid | ANN

To overcome the challenge of high sequence diversity, machine learning models are commonly used for classifying PVP and non-PVP. Most of these tools have been discussed and evaluated by several comprehensive reviews in the past three years [32,33,8]. Table 1 summarizes these tools together with their employed feature encodings and machine learning algorithms.

As indicated in Table 1, four learning models (SVM, NB, RF, and SCM) are commonly used in traditional machine learning-based methods. Ensemble-based methods utilize multiple models or training sets. For example, Meta-iPVP [26] utilizes a novel feature-representation scheme and four machine learning algorithms to encode seven input features into a probabilistic matrix. The generated probabilistic matrix is then fed into an SVM model to classify PVPs. More recently, deep learning-based methods such as VirionFinder [29] and DeePVP [28] have been proposed for structural protein identification. Both of them use convolutional neural networks (CNNs) as classifiers. The comparison between the existing tools showed that the CNN is an effective method for extracting abstract features from biological sequences [8].

Although these tools have achieved promising performance, they still have a couple of limitations. First, except for PhANNs [30] and DeePVP [28], all these tools are binary identifiers, which can only classify the input proteins as PVP or non-PVP. However, a more detailed multi-class classification of PVPs is also in demand to assign proteins to well-defined annotations (i.e., major tail, minor tail, and baseplate), yet the best F1-score of PhANNs and DeePVP on multi-class classification only reaches roughly 0.7 on the benchmark dataset. Second, the databases of the existing tools are mostly out of date, and only PhANNs provides scripts for re-training or re-constructing the models, as reported in [8]. Lacking this function hinders many tools from achieving more generalized and robust predictions for newly discovered phages. Third, although one-hot encoding and k-mer frequency encoding are widely used in the PVP classification task, they both have disadvantages. For example, as indicated in [34], using one-hot encoding for protein sequences returns sparse matrices, leading to the curse of dimensionality in the machine learning model. Conversely, k-mer frequency encoding fails to maintain the original organization of the amino acids in the raw sequence.

Overview

In this work, we present a method named PhaVIP (Phage VIrion Protein) for phage protein annotation. It has two functions. First, it can classify a protein as either PVP or non-PVP (binary classification task). Second, it can assign a more detailed annotation to predicted PVPs, such as major capsid, major tail, and portal (multi-class classification task with seven types of PVPs). To construct a complete and comprehensive dataset, we downloaded the latest annotations of phage proteins from the RefSeq database (Dec. 2022) to train and test PhaVIP. The pipeline of PhaVIP is shown in Fig. 1.
First, to address the shortcomings of the existing encoding methods, we employ chaos game representation (CGR) to encode proteins into images. Previous works show that using k-mer frequency helps distinguish proteins of different functions. However, existing models such as CNNs are not optimized for learning the associations between k-mers and their frequencies. In our design, CGR encodes k-mer frequency into images, allowing us to leverage an image classification model, the Vision Transformer (ViT), from computer vision to capture and learn the patterns in CGR images. We leverage the self-attention mechanism in ViT to learn the importance of different sub-images and their associations for protein classification. In addition, because protein lengths vary from 10^2 to 10^3 amino acids, applying CGR allows sequences of highly different lengths to be encoded into images with the same resolution. Thus, we expect that this combination can lead to better results than existing deep learning models because of the success of the ViT in image classification. In the experiments, we tested PhaVIP on multiple datasets with increasing difficulty. The comprehensive comparison with the existing methods shows that PhaVIP renders better and more robust performance. In addition, we designed two case studies to show the application of PVPs and non-PVPs for downstream phage analysis. These case studies reveal that PhaVIP can provide useful features to improve the accuracy of phage taxonomy classification and host prediction.

Methods and materials

To use machine learning methods for classifying PVP and non-PVP, the input proteins need to be encoded into numerical values. Thus, a practical and informative sequence encoding method is crucial for classification. In this work, we applied chaos game representation (CGR) to encode protein sequences. CGR is a generalized Markov chain and allows a one-to-one mapping between the image and the sequence [35]. In addition, CGR has already shown promising results in encoding biological sequences, such as generating evolutionary trees [36] and finding antimicrobial resistance genes [34].

Because CGR can represent protein sequences using unique images, and inspired by pattern recognition problems in computer vision (CV), we apply the ViT model to extract and learn features from the CGR image. The attention mechanism in ViT can reveal the representative regions in the image and learn the associations between different parts of the image [37]. Several large-scale benchmark datasets in CV have shown that ViT outperforms traditional models, such as CNNs, on image classification. All these features prompt us to employ ViT for PVP classification.

In the following sections, we will first introduce how CGR encodes protein sequences into unique images. Then, we will describe the ViT model optimized for the PVP classification task. Finally, we will introduce how we collect and generate the PVP datasets used in the experiments.

CGR encoding

The CGR was first developed to construct fractals from random inputs and was later extended to encode DNA sequences [38]. The inputs to the CGR are sequences, and the outputs are numerical matrices/images representing the sequences. The basic idea of CGR is to map each nucleotide or amino acid to a unique coordinate in a 2D space. A toy example of determining the four CGR points for the DNA sequence CATG is given in Fig. 2.
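To make the iterative placement rule concrete, here is a minimal Python sketch of the classical DNA chaos game applied to the toy sequence CATG: starting from the centre of the unit square, each step moves halfway towards the corner assigned to the next symbol. The corner layout below is an assumption for illustration; implementations differ in how they assign the four bases to the corners.

```python
import numpy as np

# Assumed corner assignment for the four bases (conventions vary).
CORNERS = {"A": np.array([0.0, 0.0]),
           "C": np.array([0.0, 1.0]),
           "G": np.array([1.0, 1.0]),
           "T": np.array([1.0, 0.0])}

def cgr_points(seq):
    """Chaos game: start at the centre of the unit square and, for each
    symbol, move halfway towards that symbol's corner."""
    p = np.array([0.5, 0.5])
    points = []
    for s in seq:
        p = p + 0.5 * (CORNERS[s] - p)
        points.append(p.copy())
    return np.array(points)

print(cgr_points("CATG"))  # the four points determined for C, A, T, G
```

Because every prefix of the sequence maps to a distinct point, the construction is one-to-one, which is the property exploited by the FCGR encoding described next.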
To encode the protein sequences into CGR images, we apply the n-flakes method [39] and use the frequency chaos game representation (FCGR) to produce images of the same resolution. In the n-flakes construction [39], the vertices are placed uniformly on a circle,

V_j = (cos(2πj/n), sin(2πj/n)), j = 0, 1, ..., n − 1,   (1)

where n is the number of vertices and is set to 20 for amino acids; each residue of the sequence then moves the current point a fixed fraction of the way towards its vertex, as in the classical CGR. The FCGR is generated by counting the points of the CGR on a pre-defined grid. Specifically, the algorithm splits the CGR image into N × N regions, and the number of points falling into each region is used as that region's frequency to compress the CGR, leading to an FCGR matrix of dimension N × N for all input sequences, regardless of their lengths. In this work, we employ the R package 'kaos' to encode protein sequences into FCGR images, and we set N = 64 to generate R^{64×64} images as the representation of the protein sequences.

Basic structure of ViT

After encoding the protein sequences into R^{64×64} images, we employ ViT for PVP classification. As shown in Fig. 1, the inputs to our ViT model are FCGR images, and the output of the ViT is the probability of the protein being a PVP. If the protein is predicted as a PVP, our ViT will assign a more detailed annotation to it.

Patch splitting and embedding

To feed an FCGR image to ViT, we reshape it into a sequence of N flattened patches, where the dimension of each image patch is R^{M×M} and N = 64²/M². In our design, M is set to 16 by default, so the length of the input sequence N is 16. Then, we use Eqn. 2 to generate the inputs to the Transformer model:

x^i = g^i_m H_e + I^i H_m, i = 1, ..., N.   (2)

Here, g^i_m ∈ R^{1×M²} is the flattened 2D patch at position i, corresponding to a "word" token in the Transformer for natural language modeling [40]. I^i is the index of the position of each patch in the input FCGR image. H_e and H_m are learnable linear projection matrices for the image patch and positional embedding, respectively.

The Transformer model

The architecture of the Transformer model in Fig. 1 is the same as the original design in [40]. The equations of the Transformer are listed in Eqn. 3:

Z'_l = MSA(LN(Z_{l−1})) + Z_{l−1},
Z_l = MLP(LN(Z'_l)) + Z'_l.   (3)

The first function is the multi-head self-attention mechanism (MSA layer), which extracts the importance of patches and learns their associations. Linear projections (MLP layer) are then employed to capture information from each patch simultaneously. Layer normalization (LN) [41] and residual connections [42] are applied before and after each block to prevent the gradients from exploding and vanishing, respectively. In the last layer, we use the SoftMax function to estimate the probability of a protein being a PVP. If the protein is predicted as a PVP, Z_2, the output of the second Transformer block in Eqn. 3, is fed to a multi-class classifier to predict a more detailed annotation.
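A compact PyTorch sketch of this pipeline is given below: a 64×64 FCGR image is split into 16×16 patches, linearly embedded together with a learnable positional term, passed through a small Transformer encoder, and pooled for the binary decision. The depth, width and head count here are illustrative assumptions, not PhaVIP's actual hyperparameters.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal sketch of the FCGR -> patches -> Transformer -> PVP
    probability pipeline (dimensions follow the text above)."""
    def __init__(self, img=64, patch=16, dim=128, heads=4, layers=2):
        super().__init__()
        n = (img // patch) ** 2                       # 16 patches
        self.patch = patch
        self.embed = nn.Linear(patch * patch, dim)    # H_e
        self.pos = nn.Parameter(torch.zeros(1, n, dim))  # positional term
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(dim, 2)                 # PVP vs non-PVP

    def forward(self, x):                             # x: (B, 64, 64)
        B = x.shape[0]
        # Cut the image into non-overlapping M x M patches and flatten them.
        p = x.unfold(1, self.patch, self.patch).unfold(2, self.patch, self.patch)
        p = p.reshape(B, -1, self.patch * self.patch)  # (B, 16, 256)
        z = self.embed(p) + self.pos
        z = self.encoder(z)
        return self.head(z.mean(dim=1))               # logits -> SoftMax

model = TinyViT()
logits = model(torch.rand(8, 64, 64))                 # 8 toy FCGR images
```

For the multi-class stage, the same encoder would be frozen and a new classification head fine-tuned, mirroring the two-stage training described below.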
Model training

Because there are two tasks in PhaVIP, classifying PVP and non-PVP sequences (binary classification) and classifying seven types of PVP (multi-class classification), we train classifiers for them separately. As introduced in [43], pre-training the Transformer model can improve the performance of the downstream task. Thus, we first apply an end-to-end method to train the binary classification model. Then, we fix the parameters in the Transformer encoder and fine-tune a new classifier layer for the multi-class classification model. Binary cross-entropy (BCE) loss and L2 loss are employed for the binary classification and multi-class classification, respectively. We employ the Adam optimizer with a learning rate of 0.001 to update the parameters for both tasks. The models are trained on an HPC cluster with an RTX 3080 GPU to reduce the running time.

Data collection and experimental setup

Although several PVP datasets have been constructed [8], the latest dataset, constructed by [30], was based on the protein annotations released before June 2020. In addition, some annotations of phage proteins are updated regularly in the RefSeq database. For example, as the authors of DeePVP [28] reported, the protein YP_006383517.1 was not annotated as a PVP until Oct. 2021 and has since been re-annotated as a tail protein in the current version. Thus, in this work, we updated the PVP classification dataset by downloading all the latest annotations from the RefSeq viral protein database (Dec. 2022). Following the guidelines of the third-party review [8], we first recruited proteins that belong to phages. Then, proteins with low-confidence annotations, such as "hypothetical protein", "similar to", "xx-like", "unnamed", and "putative", were removed. We extracted structural protein sequences by searching for keywords such as "portal", "capsid", "tail", "fiber", "tape measure", "baseplate", and "structural". Non-structural proteins were identified using enzyme names, such as annotations ending with "ase". In addition, we also used other keywords, such as "transcription", "holin", "lysin", and "regulator", to construct the non-PVP set. To remove potentially redundant sequences, we employed CD-HIT [44] to cluster sequences with above 90% similarity and used the longest sequence to represent each cluster. Finally, our dataset contains 35,213 PVP sequences and 46,883 non-PVP sequences.

Splitting the dataset

We split our PVP dataset with increasing difficulty when constructing the training and test sets. There are two tasks for PVP classification: classifying PVP and non-PVP sequences (binary classification) and predicting the PVP types (multi-class classification). In the binary classification task, we use all the proteins for the data partition. In the multi-class classification task, we use the proteins annotated with "portal", "major capsid", "minor capsid", "major tail", "minor tail", "baseplate", and "tail fiber" to construct the multi-class classification dataset. All the remaining PVPs are labeled as "other". We selected these seven classes because they represent the dominant structural protein roles and contain enough sequences (> 100) for training and testing.

Splitting by time

As mentioned in [8], splitting the training and test sets by time is a widely used data partition method, which mimics the application scenario of using known PVPs to discover new ones. In this dataset, proteins released before Dec. 2020 comprise the training set, while proteins released after that date comprise the test set.
Finally, we have 27,704 PVP sequences and 36,778 non-PVP sequences for training, and 7,509 PVP sequences and 10,103 non-PVP sequences for testing in the PVP classification task. To balance the dataset, we randomly sampled non-PVP sequences so as to maintain the same number of samples in the binary classification task, as suggested in [8]. In the multi-class classification task, we keep the original data distribution, following [30].

Splitting by similarity

To test PhaVIP's performance in classifying diverged PVPs, we constructed a hard case where the test sequences share low similarity with the training proteins. Specifically, we applied an all-against-all BLASTP search to our PVP dataset and calculated the product of the pairwise identity and the alignment coverage, where coverage is the ratio of the aligned length to the length of the query sequence. Then, we employed the data partition strategy proposed in [45] to create training and test data with a specified maximum similarity between train and test. In this work, we chose 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9 as the thresholds and employed stratified sampling to split the training and test sets.

Metrics

As mentioned in [8], the widely used metrics for evaluating PVP classification performance are precision, recall, and F1-score. Their formulas are listed in Eqns. 4-6:

Precision = TP / (TP + FP),   (4)
Recall = TP / (TP + FN),   (5)
F1-score = 2 × Precision × Recall / (Precision + Recall).   (6)

For binary PVP classification, true positives (TP), false negatives (FN), and false positives (FP) represent the number of correctly identified PVPs, the number of PVPs misclassified as non-PVPs, and the number of falsely identified PVPs, respectively. We also report the area under the ROC curve (AUCROC) for comparison. For the multi-class classification task, we calculate precision, recall, and F1-score for each class.
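The three metrics of Eqns. 4-6 are straightforward to compute from the confusion counts; the following sketch implements them for the binary task (for the multi-class task, the same formulas are applied per class). The toy labels in the usage line are made up.

```python
import numpy as np

def binary_prf(y_true, y_pred):
    """Precision, recall and F1-score (Eqns. 4-6) for labels in {0, 1}.
    Assumes at least one positive prediction and one positive label."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(binary_prf([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # (0.667, 0.667, 0.667)
```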
Result

In the experiments, we validate our pipeline on several datasets and compare PhaVIP against the state-of-the-art methods covered in the third-party review [8], including VirionFinder [29], PhANNs [30], DeePVP [28], Meta-iPVP [26], PVP-SVM [22], and PVPred-SCM [17]. Of these tools, only PhANNs provides source code for re-training or updating the reference database. Thus, we were able to retrain PhANNs for both the binary and the multi-class classification tasks using the suggested hyperparameters. The other tools do not provide a retraining function, so we applied them to the test data directly.

In the following sections, we will first evaluate the PVP classification performance. Then, following [28], we will show a case study of classifying PVPs on the mycobacteriophage PDRPxv genome, a newly identified phage that is a candidate therapy against pathogenic Mycobacterium. Finally, we investigate whether using classified PVPs and non-PVPs can benefit two important phage analysis tasks: phage taxonomy classification and host prediction.

To improve the robustness of the model, we trained PhaVIP and PhANNs using ten-fold cross-validation. First, we split our training set into ten subsets. Then, we iteratively selected nine subsets for training and one subset for validation. The model that achieved the best performance on the validation set was kept for the subsequent experiments. For the other methods, we used the provided models with the suggested parameters on the test proteins. The ROC curves of all the methods are shown in Fig. 4. The AUCROC values reveal that PhaVIP gives the most reliable results on the dataset split by time. Because PVPred-SCM does not output a prediction score, we only report its recall and false positive rate.

In order to show the classification performance in real application scenarios, we also recorded the precision, recall, and F1-score of all tested tools under their default score cutoffs in Fig. 5 and Table S1 in the supplementary file. The results reveal that PhaVIP and DeePVP achieve the highest precision (0.94), while PhaVIP has a higher recall than DeePVP.

Performance on the low-similarity dataset

It is usually much harder to annotate diverged proteins. As mentioned in Section 2.3, we use identity × coverage as the similarity measure and control the maximum similarity between the training and test sets. We generated six datasets with decreasing similarity for the binary classification task and the multi-class classification task separately. The F1-scores of PhaVIP and PhANNs are shown in Fig. 7 and Fig. 8. The detailed confusion matrices of the classification can be found in Tables S3-S14 in the supplementary file.

As expected, the F1-scores of both methods increase with the train-vs-test similarity. The gap between PhaVIP and PhANNs clearly shows that our model competes favorably against PhANNs across a wide range of similarities in both the binary and multi-class classification tasks.

Case study: annotating proteins on the mycobacteriophage PDRPxv genome

In this case study, we employed PhaVIP to annotate the proteins translated from mycobacteriophage PDRPxv, which was recently identified as a candidate therapy against Mycobacterium. According to [16], there are 107 predicted proteins in the PDRPxv genome in total. The authors identified 12 PVPs using mass spectrometry and 12 non-PVPs using an alignment method (BLAST). The functions of the other 83 proteins remain unknown. Because PDRPxv is not part of the RefSeq dataset, we can evaluate PhaVIP by comparing its predictions with the 24 annotations derived by mass spectrometry and BLAST.

We used the 24 annotated proteins as input and tested the performance of the best four tools (from the benchmark experiment in Fig. 5). As shown in Table 2, PhaVIP performs better than the other tools. In addition, all the machine learning-based methods are able to predict the functions of the remaining 83 proteins, demonstrating the utility of learning-based methods for PVP classification. We used a Venn diagram to visualize the relationship between the predicted PVP sets. As shown in Fig. 9, PhaVIP, VirionFinder, and PhANNs identified more PVPs than DeePVP. This is consistent with DeePVP's low recall observed in Fig. 5. In addition, 93% of the PVPs predicted by PhaVIP are also classified as PVPs by the other methods, a higher fraction than for PhANNs and VirionFinder.

Using classified proteins in two important applications

It is widely known that phage proteins play essential roles in taxonomy classification and host prediction. In this section, we investigate the roles of PVPs and non-PVPs in these two tasks.
Phage taxonomy classification

Recently, many new phages have been identified using high-throughput sequencing, especially metagenomic sequencing. vConTACT 2.0 [46] is a widely used and robust tool for phage taxonomy classification, as reported in the phage taxonomy review [47]. It exploits the conservation of protein organization for phage classification. Specifically, vConTACT 2.0 calculates a p-value that estimates the significance of two phage sequences sharing an observed number of proteins. A protein-sharing network is then constructed based on the p-values, and a clustering algorithm is applied to group "similar" sequences into the same cluster. The known labels of the reference genomes in a cluster are then passed to the other sequences in the same cluster. Although vConTACT has high accuracy in classifying complete or near-complete phage sequences, its running time is high because of the large-scale pairwise alignments. Thus, instead of using all proteins (Fig. 10 A), we propose to use only PVPs or only non-PVPs to evaluate the similarity between phages. In particular, because PVPs have been applied successfully in phylogenetic tree construction, we expect that using just PVPs can achieve a classification accuracy comparable to using all proteins. Thus, in this experiment, we run vConTACT 2.0 with just PVPs or just non-PVPs and evaluate how each protein set affects the classification results. First, we downloaded the benchmark dataset provided by [47]. This dataset was constructed using 1,460 RefSeq phage sequences from the latest ICTV 2022 taxonomy. It was split by time: 80% of the sequences in each family were used as the training set, and the remaining sequences were used as the test set. Second, we applied Prodigal [48] to predict and translate proteins from the phage genomes in the training and test sets. PhaVIP was then employed to annotate each protein. Finally, we used the predicted PVPs and non-PVPs, respectively, to predict the taxonomy via vConTACT. Fig. 10 B and C sketch the two pipelines.

Figure 11: vConTACT taxonomy classification results using different sets of proteins. "Random set 1" and "Random set 2" represent randomly selected protein sets, which have the same number of proteins as the PVP and non-PVP sets, respectively.

The taxonomy classification results in Fig. 11 show that the PVP version of vConTACT 2.0, which uses only PVPs for taxonomy classification, achieves almost the same performance as the regular vConTACT 2.0. In addition, because PVPs account for only about one fifth of the total predicted proteins, using PVPs for taxonomy classification reduces the running time significantly. Because running PhaVIP takes only about seven minutes for all proteins, even with this preprocessing step the total running time of taxonomy classification by vConTACT 2.0 drops from 89 minutes to 9 minutes. Using non-PVPs for taxonomy classification also reduces the running time, but the accuracy is 3% lower than when using PVPs.

A fair question is whether any set of randomly chosen proteins can achieve similar accuracy with reduced running time. To answer this question, we randomly chose the same numbers of proteins as in the PVP set and the non-PVP set for taxonomy classification, respectively. In this experiment, the PVP set and "Random set 1" contain 7,105 proteins, and the non-PVP set and "Random set 2" contain 29,321 proteins.
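Before turning to those results, the protein-sharing significance test at the heart of vConTACT 2.0 can be illustrated with a simple hypergeometric model: given the total number of protein clusters and the number carried by each genome, how surprising is the observed overlap? This is a minimal sketch of the idea, not vConTACT's actual implementation, and all numbers below are made up.

```python
from scipy.stats import hypergeom

def sharing_pvalue(total_clusters, n_a, n_b, shared):
    """P-value for observing >= `shared` common protein clusters between
    two genomes carrying n_a and n_b clusters, drawn from a pool of
    `total_clusters` clusters (hypergeometric model)."""
    # sf(k - 1) gives P(X >= k) under the hypergeometric distribution.
    return hypergeom.sf(shared - 1, total_clusters, n_a, n_b)

# Two phages sharing 30 of their protein clusters: a very small p-value
# means a strong edge in the protein-sharing network.
print(sharing_pvalue(total_clusters=25000, n_a=80, n_b=95, shared=30))
```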
The results in Fig. 11 indicate that using a random set of proteins cannot achieve accuracy comparable to using just PVPs. In addition, vConTACT 2.0's results using "Random set 1" are worse than those using "Random set 2", probably because "Random set 2" contains more proteins than "Random set 1". Overall, these results show that PhaVIP can help select a small subset of important proteins for taxonomy classification.

Phage host prediction

The hosts of phages are mainly bacteria. Identifying phage-host relationships helps decipher the dynamic relationships between microbes. In addition, because of the rapid rise of antibiotic-resistant pathogens, phage therapy has become a potential alternative to antibiotics for killing "superbugs" [49]. Thus, predicting the phage host is important for both fundamental research and phages' applications.

As reported in [50], sequence similarity can be utilized for host prediction. If two phages share similar protein organizations, they tend to infect the same host. In addition, sequence similarity between phages and bacteria may help host prediction because phages can mobilize host genes [15]. Thus, we developed a host prediction pipeline based on protein similarity in order to investigate how different types of proteins affect the prediction performance. A sketch of the pipeline is shown in Fig. 12.

First, we downloaded the widely used benchmark dataset for host prediction [51,50]. Because the alignment-based method cannot predict new labels, we only keep the 423 phages in the test set that infect the 59 host species represented in the training set.

Second, we create the reference protein databases using the predicted proteins from all the phages in the training set and from their hosts. As shown in Fig. 12 A, we store the proteins from the phages and from their hosts in two separate databases. Each protein has a taxonomic label: a phage protein's label is determined by its host, while a bacterial protein's label is its own taxonomy. Given a query/test phage, we predict its proteins and annotate them as PVP or non-PVP using PhaVIP. Then, we align the PVPs to the phage and bacterial protein databases and record each PVP's best alignment against each database. The labels of the best-aligned proteins are used for host prediction. Because there are multiple proteins, we apply a majority vote, as shown in Fig. 12 B: the label with the most votes is assigned as the host of the phage. In the example given in Fig. 12 B, three proteins are labeled as E. coli and one protein as Salmonella enterica, so the final predicted host of this phage is E. coli. Because we have two different databases, we record the results for the phage database and the bacterial database separately. As a control experiment, we also repeated the host prediction process using only non-PVPs and using all proteins. The host prediction results at ranks from species to family are shown in Fig. 13.
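A minimal sketch of the majority-vote step described above, reproducing the E. coli example from Fig. 12 B (the protein labels are hypothetical):

```python
from collections import Counter

def predict_host(best_hit_labels):
    """Majority vote over the host labels of each query protein's best
    alignment; ties are broken arbitrarily in this sketch."""
    votes = Counter(best_hit_labels)
    host, _ = votes.most_common(1)[0]
    return host

# Three proteins hit E. coli, one hits Salmonella enterica.
print(predict_host(["E. coli", "E. coli", "Salmonella enterica", "E. coli"]))
# -> E. coli
```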
The results reveal that the similarity search against the phage protein database consistently performs better than the search against the bacterial protein database. This phenomenon has also been noted for existing host prediction tools: as reported in [50], tools based on phage-phage similarity usually perform better than those based on phage-bacteria similarity. In addition, we found that non-PVPs can achieve better performance in host prediction.

Figure 1: The pipeline of PhaVIP, which consists of three major stages: FCGR protein encoding, patch embedding, and Transformer modules. When taking a test/query protein as input, PhaVIP first classifies it as PVP or non-PVP. Only predicted PVPs are classified into more detailed annotations.

Figure 2: Applying CGR to a toy sequence, CATG. Left: division of the CGR space in the iterative process (reproduced from [38]). Right: the process of determining the four points for CATG using CGR.

Figure 3: FCGR images for three sequences. (A): a random sequence; the order of the vertices/amino acids is shown around the ring. (B): a baseplate protein with RefSeq accession YP_009788086.1. (C): a minor capsid protein with RefSeq accession YP_009900655.1. The green and blue boxes in (B) and (C) mark differing patterns, while the red boxes mark similar patterns.

Fig. 3 shows FCGR images of two different phage proteins and a random amino acid sequence. The random sequence in Panel (A) was generated by drawing one of the amino acids uniformly at random 1,000 times. In contrast to the random sequence, the FCGRs of the baseplate protein and the minor capsid protein (Fig. 3 (B) and (C)) reveal more distinctive patterns. For example, the red patches in Fig. 3 (B) and (C) exhibit a similar pattern, while the blue patches are highly different, which may signal key sequence features that can distinguish the baseplate and minor capsid proteins. The patches indicate the distribution of short motifs ending with particular amino acids. These patches and their relationships/associations with other patches can be learned by our ViT model to improve classification accuracy.

Figure 4: The ROC curves of the binary PVP classification by different tools. The number following each tool name is its AUCROC value. PVPred-SCM does not output a prediction score, so only its TPR and FPR are reported.

Figure 5: The classification performance of the binary PVP classification under the default/suggested thresholds.

Figure 6: The performance of the multi-class classification. X-axis: the name of each PVP class, ordered by class size. Y-axis: F1-score.

Figure 7: The binary classification performance on the low-similarity dataset. X-axis: the maximum value of identity × coverage between the training and test sets. Y-axis: F1-score.

Figure 8: The multi-class classification performance on the low-similarity dataset. X-axis: the maximum value of identity × coverage. Y-axis: F1-score.

Figure 9: The Venn diagram of the complete PVP classification results of the four best machine learning methods on mycobacteriophage PDRPxv.

Figure 10: Three versions of vConTACT 2.0. A: the original design of vConTACT 2.0 uses all the proteins from the phage genome to construct the protein-sharing network. B: the PVP version of vConTACT 2.0. C: the non-PVP version of vConTACT 2.0.
Figure 12: The pipeline of using similarity search for host prediction. A: the similarity-search-based host prediction; we implemented two pipelines using phage proteins and bacterial proteins as the reference databases, respectively. B: the majority-vote method for generating the final host prediction.

Figure 13: Host prediction results after PVP classification. "(phage)" and "(bacteria)" refer to the similarity search against the phage protein database and the bacterial protein database, respectively.
7,427
2023-01-29T00:00:00.000
[ "Computer Science", "Biology" ]
Nuclear structures: Twinning and modulation in crystals

Crystal structure analysis is a standard technique routinely applied to single crystals as well as powders. However, the process is not so straightforward if the crystal sample is affected by twinning or if the structure is modulated. In such cases the standard procedures are not directly applicable. The main purpose of this contribution is to show how to solve and refine such difficult structures. While for twinned structures the basic property of a crystal, translation symmetry in three-dimensional space, remains valid, for modulated crystals a special superspace theory must be exploited in order to describe the atomic structure with crystallographic methods generalized to superspace.

Introduction

Modern structure analysis of single crystals based on diffraction experiments is nowadays a standard discipline which allows solving and refining the structures of most new crystalline materials. Data are usually acquired on laboratory X-ray diffractometers, and structure solution and refinement are carried out almost automatically with standard program packages such as SHELX [1], Olex2 [2], Crystals [3] or Jana2006 [4]. Such work can be done by a non-specialist in crystallography because most possible problems can be detected and corrected with the help of a checking program such as Platon [5]. A simple structure of a well-diffracting sample can now be measured, solved and refined in less than one hour, and the task can be completed without knowing what is behind these powerful tools.

However, not all structures can be solved using these standard approaches. Two effects, twinning and modulation, often cause serious problems in structure solution and refinement when they are present in a crystal. These effects are especially important during phase transitions, where the electric and/or magnetic properties of the studied crystal may depend substantially on the phase, and therefore a full description of the corresponding structural changes is very important. If these changes affect atomic positions as well as the ordering of magnetic moments, such crystals must be studied by both X-ray and neutron diffraction techniques. In this paper we shall concentrate on the problems of solution and refinement of nuclear structures from neutron diffraction experiments. However, most of the explanations are also valid for X-ray diffraction.

In spite of the fact that most standard programs can handle twinning in a crystal, the application of these tools requires a much deeper understanding of crystallography. Even more complicated are modulated crystals, which lack the basic property of the classical crystal, i.e. translation symmetry. In this paper we briefly introduce the problems of solving and refining such difficult structures. For more details about the underlying theory, we recommend the monographs [6] and [7].

Twinning

2.1 Overlaps in twinned crystals

Twinned crystals are composed of structurally identical domains mutually related by proper or improper rotations. The number of different domains is usually not very large, and in most cases there are just two differently oriented domains. The simplest case leads to the existence of two mutually rotated diffraction patterns, which can easily be recognized from the diffraction pattern, see Fig. 1.
In this example, reflections (h, k, 0) are systematically overlapped with reflections (−h, −k, 0) of the second domain. Depending on the monoclinic angle β, additional (random) overlapping reflections may occur for (h, k, l) with l close to 2na/c · cos β.

The relationship between the orientations of the two domains is described by the so-called twinning matrix T_2, defined by the equation

(a_2, b_2, c_2)^T = T_2 (a_1, b_1, c_1)^T,   (1)

where a_i, b_i and c_i are the cell vectors of the i-th domain. The twinning matrix can then be used to express the diffraction indices of the second domain in the reciprocal coordinate system of the first (reference) domain:

(h_21, k_21, l_21) = (h_2, k_2, l_2) T_2.   (2)

While the indices (h_2, k_2, l_2) are integers defining a reflection of the second domain, the corresponding coordinates (h_21, k_21, l_21) in the reference system of the first domain need not be integers. The twinning matrix can be used for predicting full and partial overlaps of diffraction spots, using the distance in reciprocal space between (h_21, k_21, l_21) and the closest integer triple (h_1, k_1, l_1). Reflections are considered fully overlapped if this distance is smaller than the resolution limit of the data collection and fully separated if the distance is larger than a selected separation limit. Reflections with partial overlaps, i.e. with a distance larger than the resolution limit but smaller than the separation limit, should be discarded from the refinement. Such a criterion is very rough and may lead to a relatively large number of deleted reflections. Moreover, the resolution and separation limits are not the same for all pairs of reflections, because they may be registered on an area detector at different geometries. For this reason, standard data processing programs such as CrysAlis, SAINT or EVAL recognize the overlaps during data processing and encode the corresponding information (i.e. which reflections are overlapped and which are separated) in the so-called hklf5 format. In that case the partially overlapped reflections are treated together in the refinement and no information needs to be deleted.

The case presented in Fig. 1 is simple: the twinning is easily recognized from the diffraction pattern, the structure is solved by standard techniques from one domain or from detwinned data and, finally, the structure is refined using the twinning matrix or the hklf5 format.

Twins with full overlap of all reflections are a much more difficult case, since they cannot be easily recognized. In Fig. 2 we present a simulated diffraction pattern of a tetragonal structure with point symmetry 4/m, having different fractions of twin domains related by a 180° rotation about the b axis. In cases (b) and (c) no indication of twinning is apparent and, moreover, the symmetry of the diffraction pattern (c) is higher (4/mmm) than the Laue symmetry of the structure. This means that the presence of twinning in the crystal may obscure the correct symmetry, which can then be deduced only during the solution and refinement process. From Fig. 2, the necessary condition for such full overlaps is obvious: the point symmetry of the reciprocal lattice must be higher than the point symmetry of the structure.
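The overlap criterion described above is easy to sketch in code: transform the indices of a domain-2 reflection with the twinning matrix (Eq. (2)), measure the reciprocal-space distance to the nearest integer triple of the reference domain, and compare it with the two limits. The reciprocal basis and the limits below are illustrative assumptions.

```python
import numpy as np

def classify_overlap(h2, T, Bstar, res_limit, sep_limit):
    """Classify a domain-2 reflection as fully overlapped, partially
    overlapped or separated, following the distance criterion above.
    Bstar holds the reciprocal basis vectors a*, b*, c* as rows in a
    Cartesian frame (units of 1/Angstrom)."""
    h21 = np.asarray(h2, dtype=float) @ T       # Eq. (2)
    d_frac = h21 - np.round(h21)                # offset to nearest (h1, k1, l1)
    d = np.linalg.norm(d_frac @ Bstar)          # distance in reciprocal space
    if d < res_limit:
        return "full overlap"
    if d < sep_limit:
        return "partial overlap (discard, or treat via hklf5)"
    return "separated"

# Twofold rotation expressed as an integer matrix: every reflection of
# domain 2 lands exactly on a reflection of domain 1 (merohedral case).
T = np.array([[-1, 0, 0], [0, -1, 0], [0, 0, 1]])
Bstar = np.diag([0.125, 0.125, 0.05])           # toy reciprocal cell
print(classify_overlap((2, 1, 3), T, Bstar, res_limit=0.005, sep_limit=0.02))
```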
Similar effects can be present even for structures of lower symmetry (triclinic, monoclinic, ...) if their cell parameters correspond, within the experimental accuracy, to a higher lattice point symmetry. This may happen as a result of phase transitions in which the original symmetry is reduced but the cell parameters remain almost unchanged. In the following we shall concentrate on the problems of data processing, solution and refinement of twins with full overlaps.

Symmetry of the twinned diffraction pattern

The overlapping of diffraction spots from n twin domains can be expressed by the following formula for the combined squared structure factor F^2(h):

F^2(h) = Σ_{i=1}^{n} v_i F_i^2(h T_i),   (3)

where v_i is the volume fraction of the i-th domain and F_i(h) is its structure factor. The volume fractions are normalized to 1:

Σ_{i=1}^{n} v_i = 1.   (4)

Eq. (3) is based on the assumption that the diffraction contributions of the twin domains are independent.

The density of scatterers ρ(r) is periodic with respect to any lattice translation n,

ρ(r + n) = ρ(r),   (5)

and the structure factors F(h) are the coefficients in its Fourier summation:

ρ(r) = (1/V) Σ_h F(h) exp(−2πi h·r).   (6)

The combined structure factor F(h) can be calculated from the integrated intensity I(h) of the relevant diffraction spot as for a non-twinned crystal:

F^2(h) = I(h) / (S · A · Lp),   (7)

where S stands for the scaling factor, A for the absorption correction and Lp for the Lorentz-polarization factor.

The full symmetry of the crystal is expressed by the space group symmetry, which consists of symmetry operations of the type

{R_i | s_i + c_j + n},   (8)

where R_i and s_i are the rotation and translation parts of the i-th symmetry operation, c_j is the j-th centering vector and n is an arbitrary unit-cell translation. The generalized equation for the symmetry condition is then

ρ(R_i r + s_i + c_j + n) = ρ(r).   (9)

From Eqs. (9) and (6) we get

F(h R_i) = F(h) exp(−2πi h·s_i),   (10)

and for the squared structure factors the following symmetry relationship holds:

F^2(h R_i) = F^2(h).   (11)

This means that the diffraction pattern follows the point-group symmetry. Moreover, the fact that the nuclear density is a real function leads to the following relationship for the complex-conjugate structure factor:

F(−h) = F*(h),   (12)

and therefore the diffraction pattern always contains an inversion center, even for non-centrosymmetric structures.

As mentioned above, complete overlaps of the diffraction spots of twinned crystals are possible only if the point symmetry H of the lattice is higher than the point symmetry G of the structure. Let the index of the subgroup G in H be n. Then the point group H can be decomposed into left cosets:

H = G T_1 ∪ G T_2 ∪ ... ∪ G T_n,   (13)

where the set of twinning operations {T_1, T_2, ..., T_n} is selected arbitrarily, one representative from each coset. Without loss of generality the first twinning element can be chosen as the identity operation. In the case of equal volume fractions, the symmetry of the diffraction pattern follows the lattice symmetry H. But as the volume fractions need not be exactly equal, we cannot a priori use all symmetry operations from H during data merging. The question is whether all symmetry operations from the point group G are always present in the diffraction pattern. For the reflection h G_j, where G_j is an arbitrary operation from G, Eq. (3) gives

F^2(h G_j) = Σ_{i=1}^{n} v_i F_i^2(h G_j T_i),

and the symmetry requirement F^2(h G_j) = F^2(h) then leads to the set of conditions

G_j T_i = T_i G_l,   (14)

where G_l is again an operation from G. These conditions are fulfilled for the whole subgroup G only if it is a normal subgroup of H. In case the subgroup G is not a normal subgroup of H, only the operations fulfilling Eq. (14) can be used in the merging procedure of symmetry-equivalent reflections. Note that if the index of the subgroup G in H is two, the subgroup is always normal.

The conclusion about the symmetry of the diffraction pattern is that it can mimic the lattice symmetry, but it can even mimic a symmetry lower than the point group of the crystal.
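The normality condition of Eq. (14) can be checked mechanically once the point groups are written as matrices: G is normal in H if every conjugate T g T^{-1} with T from H stays inside G. A small sketch, taking G = {1, 2_z} inside an mm2-like set {1, 2_z, m_x, m_y} as an index-2 (hence normal) example:

```python
import numpy as np
from itertools import product

def is_normal(G, H):
    """True if the point group G (list of integer matrices) is a normal
    subgroup of H: T g T^-1 must stay in G for all T in H, g in G."""
    Gset = {g.tobytes() for g in G}
    for T, g in product(H, G):
        conj = np.rint(T @ g @ np.linalg.inv(T)).astype(int)
        if conj.tobytes() not in Gset:
            return False
    return True

E   = np.eye(3, dtype=int)
C2z = np.diag([-1, -1, 1]).astype(int)   # twofold rotation about c
mx  = np.diag([-1, 1, 1]).astype(int)    # mirror normal to a
my  = np.diag([1, -1, 1]).astype(int)    # mirror normal to b
print(is_normal([E, C2z], [E, C2z, mx, my]))   # True: index-2 subgroup
```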
Systematic extinctions of twins

Systematic extinctions are used to determine the correct space group of the studied compound. From Eq. (10) it follows that reflections fulfilling the condition h R_j = h (i.e. h invariant with respect to R_j) can have a non-zero structure factor only if

h · s_j = n,   (15)

where n is an arbitrary integer.

Example: the mirror plane with its normal parallel to the c axis has the following matrix form with respect to the crystal coordinate system a, b and c:

R = (1 0 0 / 0 1 0 / 0 0 −1).   (16)

From space group theory it follows that in a primitive unit cell s_1 and s_2 are equal either to 0 or 1/2, while the third component s_3 is not restricted and depends on the origin selection. Taking into account all combinations and Eq. (15), we obtain the extinctions summarized in Table 1.

Table 1: Extinction conditions for (glide) planes perpendicular to c.

(s_1, s_2, s_3) | Symmetry operation | Extinction condition
(0, 0, s_3) | m | none
(1/2, 0, s_3) | a | (h k 0): h = 2n
(0, 1/2, s_3) | b | (h k 0): k = 2n
(1/2, 1/2, s_3) | n | (h k 0): h + k = 2n

Overlapping of reflections in twinned crystals can partially hide the extinction conditions. In the example shown in Fig. 3 such an effect is demonstrated for the a-glide plane: as a result of the twinning, the absent reflections must fulfill both conditions h = 2n + 1 and k = 2n + 1, which cannot occur for a non-twinned crystal. Thus the extinctions are violated in a very specific way, which can help to recognize that the crystal is twinned.

Solution and refinement of twinned structures

The solution of crystal structures from twinned samples with completely overlapping diffraction patterns may be a difficult task. In cases where the twinning results from a phase transition from a higher- to a lower-symmetry phase, the known structural model of the higher-symmetry phase can be used as a starting point for the structure solution of the lower-symmetry phase. However, in many cases several different subgroups have to be tested to find the correct solution.

If the domains are not equally occupied and an approximate volume fraction is known, the observed structure factors can be corrected to obtain "detwinned" structure factors useful for ab initio solution by standard methods; for the simplest case of two domains related by a twofold twin operation,

F_1^2(h) = [v_1 F_obs^2(h) − v_2 F_obs^2(h T_2)] / (v_1^2 − v_2^2).   (17)

However, this correction cannot be used for equally or almost equally occupied domains.

On the other hand, as soon as some starting structural model is known, the refinement and completion of the structure is very similar to that of a regular structure. The number of parameters in the refinement is enlarged only by the (n − 1) volume fractions.

For completing the structure, Fourier maps can be used in which the observed structure factors are corrected for twinning, e.g. by distributing the observed intensity of an overlapped spot among the contributing domains in proportion to their calculated structure factors.

The methods for handling twinned crystals described above have been implemented in the program Jana2006, and several typical practical examples are included in the Jana cookbook (examples 3.1-3.4). Both can be downloaded from the Jana home page: http://jana.fzu.cz/.
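For the two-domain case, the detwinning correction of Eq. (17) amounts to solving a 2×2 linear system per pair of overlapped reflections. A minimal sketch (the intensities and the volume fraction are made-up numbers):

```python
import numpy as np

def detwin_two_domains(I_h, I_hT, v1):
    """Detwinned squared structure factor for two domains related by a
    twofold twin operation with full overlap, cf. Eq. (17):
        I(h)  = v1*F^2(h)  + v2*F^2(hT)
        I(hT) = v1*F^2(hT) + v2*F^2(h)
    The correction diverges for v1 ~ v2 = 0.5, as noted in the text."""
    v2 = 1.0 - v1
    return (v1 * I_h - v2 * I_hT) / (v1**2 - v2**2)

print(detwin_two_domains(I_h=900.0, I_hT=400.0, v1=0.7))   # -> 1275.0
```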
Modulated crystals

3.1 Superspace approach for modulated structures

Translational periodicity is the basic property of the classical crystal, see Eq. (5). A direct consequence of this periodicity is the existence of a diffraction pattern (of X-rays, neutrons or electrons) with sharp diffraction spots localized at the points of the reciprocal lattice defined by the basis vectors a*_1, a*_2 and a*_3:

H = h_1 a*_1 + h_2 a*_2 + h_3 a*_3.   (18)

The reciprocal lattice vectors are related to the translational (direct-space) lattice vectors by the equations

a_i · a*_j = δ_ij.   (19)

However, there are compounds which also give additional sharp diffraction spots, so-called satellites, localized out of the nodes of the reciprocal lattice. These spots can be indexed only if one or more additional (so-called modulation) vectors are used:

H = h a*_1 + k a*_2 + l a*_3 + m q.   (20)

In most cases one modulation vector is sufficient for indexing the satellites, and in the following text we shall confine ourselves to the case of one modulation vector. The diffraction pattern of such a modulated crystal looks like the one shown in Fig. 4.

The fact that the additional (satellite) reflections are sharp and regularly distributed in the diffraction space means that the violation of the 3d translation symmetry must be somehow regular. For the handling of modulated structures a special theory of superspace and its symmetry has been developed. The development is closely connected with the names of Peter de Wolff, Aloysio Janner and Ted Janssen [8].

The superspace approach is based on a construction which artificially moves all satellite reflections into the 4th dimension, with a shift proportional to the satellite index, see Fig. 5. The spots then form a lattice in the four-dimensional reciprocal space described by the reciprocal vectors

A*_i = a*_i, i = 1, 2, 3,   (21)
A*_4 = q + e.   (22)

The lattice character of the diffraction pattern in (3+1)d superspace means that the modulated structure has translational periodicity in (3+1)d superspace. Thus the generalized density is lattice periodic,

ρ_s(r_s + Σ_i n_i A_i) = ρ_s(r_s),   (23)

where the lattice vectors A_i fulfil the equations

A_i = a_i − (q · a_i) e, i = 1, 2, 3; A_4 = e,   (24)

as follows from the orthogonality conditions between the direct vectors A_i and the reciprocal vectors A*_j: A_i · A*_j = δ_ij.

In Fig. 6 the translation symmetry is demonstrated for a positionally modulated structure. There is again a unit cell which contains the structural information needed to generate the whole modulated structure. The superspace approach allows the generalization of standard structure tools such as Fourier and Patterson syntheses and structure solution and refinement techniques.

As the really observed 3d diffraction pattern is a projection of the auxiliary (3+1)d diffraction pattern in superspace along the vector e, the corresponding 3d Fourier map, which shows the nuclear density, is a section perpendicular to the vector e through a (3+1)d periodic superspace map. Moving this section along the vector e gives sections (so-called t-sections) of the same modulated structure which differ only by an origin shift.

The (3+1)d Fourier maps are calculated from structure factors according to an equation analogous to Eq. (6). The structure factor amplitudes are based on the observed intensities (Eq. (7)), while the phases follow from the actual model. The sections in which the modulation of one atomic coordinate x_1, x_2 or x_3 is visualized as a function of x_4 are called de Wolff's sections. They play a crucial role in finding and completing the modulation models for the individual atoms in the structure.
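The (3+1)d bases of Eqs. (21)-(24) can be verified numerically: building A and A* for a toy cell and an arbitrary q-vector, the product of the two basis matrices must be the identity. A small sketch, with assumed cell dimensions:

```python
import numpy as np

# Toy orthorhombic cell (Angstrom) and a modulation vector q = 0.22 c*.
a = np.diag([5.0, 7.0, 9.0])           # rows: a1, a2, a3 in Cartesian axes
astar = np.linalg.inv(a).T             # rows: a1*, a2*, a3*
q = 0.22 * astar[2]

# (3+1)d reciprocal basis: A*_i = (a*_i, 0) for i = 1..3, A*_4 = (q, 1).
Astar = np.zeros((4, 4))
Astar[:3, :3] = astar
Astar[3, :3] = q
Astar[3, 3] = 1.0                      # component along the extra vector e

# (3+1)d direct basis: A_i = (a_i, -q.a_i) for i = 1..3, A_4 = (0, 1).
A = np.zeros((4, 4))
A[:3, :3] = a
A[:3, 3] = -a @ q
A[3, 3] = 1.0

print(np.allclose(A @ Astar.T, np.eye(4)))   # A_i . A*_j = delta_ij -> True
```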
Superspace symmetry

A symmetry operation Ŝ in superspace must keep the generalized scattering density invariant:

ρ_s(Ŝ r_s) = ρ_s(r_s).   (25)

The set of symmetry operations fulfilling this equation constitutes a superspace group. In analogy to 3d structures, the superspace symmetry induces the diffraction symmetry, and the whole diffraction pattern, including the satellite reflections, follows the corresponding point group. Introducing symmetry in (3+1)d superspace does not mean that all possible four-dimensional space groups are available for crystal symmetry. There are some restrictions which considerably reduce the number of four-dimensional space groups acceptable for the description of modulated structures. Any superspace rotation R has the following matrix form with respect to the given crystallographic basis:

R = ( E 0 / M ε_I ).   (26)

The left-upper part E is a 3 × 3 matrix which represents a proper or improper rotation in three-dimensional space (known from 3d space groups). The right-upper part is a 3 × 1 column of zeros. This restriction follows from the fact that superspace is based on a mathematical construction which uses the auxiliary vector e; this vector must stay perpendicular to the real three-dimensional space when the symmetry operation is applied. The right-lower block ε_I is a 1 × 1 matrix with value ±1. Finally, the left-lower block M is a 1 × 3 row matrix fulfilling the equation

M = q E − ε_I q,   (27)

with integer components. This means that the rotational part of the symmetry operation is fully determined by the 3d rotational part and the modulation vector. The translation part s_E follows from the corresponding 3d space group. The only new part (compared with standard crystallography) is s_I, which defines the translation component along the additional 4th direction. The value of this component can induce systematic extinctions of satellite reflections. For example, the mirror plane with its normal parallel to the c axis and ε_I = 1 causes the extinctions described in Table 2. In this table, the symbol used for a symmetry element consists of two parts: the first one is the 3d symbol and the second one gives the additional information about s_I, which in our example is 0 or s for s_I equal to 0 or 1/2, respectively.

Table 2: Extinction conditions induced by the s_I component (mirror plane perpendicular to c, ε_I = 1).

Symbol | s_I | Extinction condition
m, 0 | 0 | none
m, s | 1/2 | (h k 0 m): m = 2n

The superspace symbols are described in the International Tables for Crystallography (IT), vol. C [9]. The symbol consists of three parts: the space group symbol, the specification of the modulation vector and the specification of the s_I components of the symmetry operations. The components of the modulation vector are either restricted by Eq. (27) to specific rational values or they can have general (irrational) values. Irrational components are specified by the Greek letters α, β or γ. Example: Pmna(0 1/2 γ)s00. The information available in IT vol. C is limited, without a list of the superspace group operations which restrict the modulation functions of atoms located at special positions. Fortunately, the list of symmetry operations for superspace groups up to (3+3)d superspace, as well as additional details about their standard settings, is accessible from [10]. The symmetry restrictions of modulation functions expressed as a combination of harmonic waves for (3+1)d superspace groups have been published [11], and they are automatically applied in the program Jana2006.
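Eqs. (26) and (27) translate directly into a few lines of code: given the 3d rotation E, the modulation vector q and ε_I, the row M is fixed and must be integral for the operation to be admissible. A sketch with a mirror perpendicular to c and a q-vector lying in the a*b*-plane (so that ε_I = +1 is allowed):

```python
import numpy as np

def superspace_rotation(E, q, eps):
    """Build the (3+1)d rotation of Eq. (26); Eq. (27) requires
    M = q.E - eps*q to be a row of integers."""
    M = q @ E - eps * q
    if not np.allclose(M, np.round(M)):
        raise ValueError("operation incompatible with this q-vector")
    R = np.zeros((4, 4))
    R[:3, :3] = E
    R[3, :3] = np.round(M)
    R[3, 3] = eps
    return R

mz = np.diag([1.0, 1.0, -1.0])     # mirror with its normal parallel to c
q = np.array([0.34, 0.0, 0.0])     # irrational component along a*
print(superspace_rotation(mz, q, eps=1.0))   # M = (0, 0, 0): admissible
```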
Basic modulation types

Modulations in a crystal can affect all structural parameters, i.e. atomic occupancies, positions, ADPs, multipole coefficients or magnetic moments. Owing to the translation symmetry in superspace, any structural parameter p(x_4) can be expressed as a Fourier expansion:

p(x_4) = p_0 + Σ_n [p_{s,n} sin(2πn x_4) + p_{c,n} cos(2πn x_4)],   (28)

where p_0, p_{s,n} and p_{c,n} are the expansion coefficients used to describe the modulation wave. The number of terms used in Eq. (28) depends on the complexity of the modulation in the crystal, which is usually connected with the maximal order of the observable satellite reflections. In the following, simple modulation models and their influence on the diffraction pattern are presented.

Occupational modulation

The simplest occupational modulation, with only one harmonic wave, can be expressed as

o(x_4) = o_0 + o_s sin(2π x_4) + o_c cos(2π x_4).   (29)

From the kinematical theory of diffraction it can be shown that such a modulation generates only first-order satellites, as visualized in Fig. 7(a). The corresponding de Wolff's section through the (3+1)d Fourier map is shown in Fig. 7(b).

This simple harmonic model for occupational modulation is not very common in real-life examples. Usually the diffraction pattern also contains higher-order satellites, and the description of the modulation needs more harmonic waves. In the limiting case the modulation takes a step-like character:

o(x_4) = 1 for x_4 ∈ [x_4^0 − Δ/2, x_4^0 + Δ/2], and o(x_4) = 0 otherwise,   (30)

where x_4^0 and Δ define the center and the length of the x_4 interval in which the atom is present. As shown in Fig. 8, the step-like modulation leads to a high number of satellite reflections. Such a modulation is called crenel modulation [12]. The description of a step-like modulation by harmonic waves would require a high number of parameters. Instead, we can use the crenel function (30), which has only two parameters: the center and the width.

Positional modulation

The diffraction patterns for weak and strong modulation amplitudes are shown in Figs. 9 and 10. Contrary to occupational modulation, here even one harmonic wave can generate higher-order satellites. With growing amplitude of the modulation, the order and relative intensity of the satellites increase.

Composite structures

These structures consist of two or more modulated subsystems, each having its own reciprocal basis. The reciprocal basis vectors of the subsystem ν are related to a common set A*_i, i = 1, 2, 3, 4, by the equation

A*^ν_i = Σ_j Z^ν_ij A*_j,   (31)

where Z^ν_ij is an integer matrix. Figure 11 shows a simple example of a composite structure made of two columns of atoms with different periodicity along the a^1_1 || a^2_1 direction, while the other cell parameters are identical: a^1_2 = a^2_2 and a^1_3 = a^2_3. In this example, the subsystems are not modulated, and each of them makes a diffraction pattern which can be fully indexed by three indices. We need four indices, however, to index both patterns together. If we choose the first subsystem as the reference subsystem, the Z matrices take the form

Z^1 = the identity matrix, Z^2 = the identity matrix with the first and fourth rows interchanged.   (32)

The diffraction pattern of a non-modulated composite crystal contains only the main reflections of both subsystems. However, the interaction between the subsystems usually induces mutual modulations, as shown in Fig. 12.

Solution of modulated structures

In many cases modulated structures can be solved in two steps. In the first step only the main reflections are used to solve the average structure by standard methods. Such a structural model can show features like split atomic positions or unusually large ADP ellipsoids (see Fig. 13). Such effects help to predict the type of modulation and to recognize the atoms which are strongly modulated. In the second step, the modulation waves are found by refinement starting from small randomly chosen displacements.
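The modulation functions introduced above, the harmonic expansion of Eq. (28) and the crenel function of Eq. (30), are simple to evaluate numerically; the sketch below can be used to tabulate both as functions of x_4 (the parameter values are arbitrary):

```python
import numpy as np

def harmonic_modulation(x4, p0, ps, pc):
    """Eq. (28): p(x4) = p0 + sum_n ps[n-1]*sin(2*pi*n*x4)
                             + pc[n-1]*cos(2*pi*n*x4)."""
    x4 = np.asarray(x4, dtype=float)
    p = np.full_like(x4, p0)
    for n, (s, c) in enumerate(zip(ps, pc), start=1):
        p += s * np.sin(2 * np.pi * n * x4) + c * np.cos(2 * np.pi * n * x4)
    return p

def crenel(x4, x4_0, delta):
    """Eq. (30): occupancy 1 inside the interval of width delta centred
    at x4_0 (taken modulo 1), 0 outside."""
    d = np.mod(np.asarray(x4) - x4_0 + 0.5, 1.0) - 0.5
    return (np.abs(d) <= delta / 2).astype(float)

x4 = np.linspace(0.0, 1.0, 9)
print(harmonic_modulation(x4, 0.0, ps=[0.05], pc=[0.0]))  # one harmonic wave
print(crenel(x4, x4_0=0.25, delta=0.5))                   # step-like occupancy
```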
However, in the case of strong modulations the second step or even the first step can fail, because both the average structure and the initial modulation wave may be too far from the correct solution. As in standard crystallography, refinement and Fourier methods only work for "almost finished" structures. This was the reason why the standard solution methods were generalized for modulated structures, with the aim of solving them ab initio. Direct methods based on the Sayre equation were used for the development of the program DIMS, written by Fan Hai-fu [13,14]. The heavy-atom method for modulated crystals was developed by Steurer [15] and Petricek et al. [16]. However, these methods are not applicable to all cases.

The most promising method for the ab initio solution of strongly modulated structures is so-called charge flipping [17,18]. The method has been implemented in the program Superflip [19], which is distributed together with the program Jana2006. This method can solve modulated structures in one step, and the starting modulation functions can be deduced directly from the output of the charge flipping, which is the scattering-density map in superspace.

With the help of modern diffractometer software and charge flipping, steps like indexing, data collection, data processing and structure solution can be done almost routinely. However, this does not mean that the complete solution of modulated structures is a routine job. Difficulties start after the solution step, where e.g. the interpretation of the charge flipping output requires experience, manual work and the analysis of Fourier sections. This interpretation is crucial for the proper selection of modulation parameters and can be specific to each modulated structure. The selection is important not only for the refinement but especially for the interpretation of the results, where we aim to understand the reasons why the structure is modulated.

Similarly as for twinned structures, the Jana cookbook contains several worked examples (5.1-5.5 and 7.1-7.3) covering typical cases of modulated structures. Moreover, everybody can participate in the workshops which are regularly announced on the Jana web page for people interested in the practical solution of modulated structures, as well as in the bi-annual workshops organized by the university in Bayreuth and in many other events focused on aperiodic crystallography.

Figure 1. Diffraction pattern of a monoclinic crystal, with monoclinic angle different from 90°, twinned by a general 180° rotation about the c axis.

Figure 4. (h,0,l) diffraction plane of Na_2CO_3. The white grid represents the lattice of the main reflections. The additional (satellite) diffraction spots are regularly displaced from the regular (main) spots by the q-vector (red arrows).

Figure 5. Introducing superspace in reciprocal space: the vector e is perpendicular to the diffraction space R*_3. Black spots are main reflections located at the nodes of the three-dimensional reciprocal lattice. Gray spots are satellite reflections. The new reciprocal lattice vector: A*_4 = q + e. White spots are satellites projected onto the new lattice vector; the blue area is a (3+1)d reciprocal cell describing the main reflections as well as the satellites.

Figure 6. Translation symmetry of a positionally modulated structure in (3+1)d superspace (caption text lost in extraction; content inferred from the reference to this figure in the text).
5,836.6
2017-01-01T00:00:00.000
[ "Physics", "Materials Science" ]
Biomolecular corona on nanoparticles: a survey of recent literature and its implications in targeted drug delivery

Achieving controlled cellular responses of nanoparticles (NPs) is critical for the successful development and translation of NP-based drug delivery systems. However, precise control over the physicochemical and biological properties of NPs could become convoluted, diminished, or completely lost as a result of the adsorption of biomolecules to their surfaces. Characterization of the formation of the "biomolecular" corona has thus received increased attention due to its impact on NP and protein structure as well as its negative effect on NP-based targeted drug delivery. This review presents a concise survey of the recent literature concerning the importance of the NP-biomolecule corona and how it can be utilized to improve the in vivo efficacy of targeted delivery systems.

INTRODUCTION

Medical applications of nanoparticles (NPs) are wide-reaching, as evidenced by their rapid development as therapeutic and diagnostic agents (Peer et al., 2007; Zhang et al., 2008; Hubbell and Langer, 2013). In particular, significant advances have been made in cancer therapy by pursuing NPs as drug delivery systems (Gu et al., 2007; Pearson et al., 2012; van der Meel et al., 2013); however, many challenges, especially with regard to achieving precise control over nano-bio interactions, remain to be addressed (Chauhan and Jain, 2013; Pearson et al., 2014). As increasingly complex NP formulations move toward later stages of clinical development, the need to understand and overcome those challenges is becoming imminent.

One of the most important challenges affecting NP-based drug delivery is the formation of the "biomolecule" or "protein" corona (Cedervall et al., 2007). As NPs enter physiological fluids, proteins and other biomolecules such as lipids adsorb to their surfaces with various exchange rates, leading to the formation of the biomolecular corona (Figure 1A) (Nel et al., 2009; Monopoli et al., 2012; Saptarshi et al., 2013). As a consequence, the "synthetic identity" of the NP is lost and a distinct "biological identity" is acquired. This new identity governs how the NP is "seen" by cells and subsequently alters the way in which NPs interact with cells. The composition of the biomolecular corona is dynamic and highly dependent on the initial biological environment, indicating the possibility of exposure memory (Milani et al., 2012). The adsorption of opsonins such as immunoglobulin G (IgG), complement proteins, and others contributes to the deteriorated in vivo properties of NPs by promoting immune system recognition and rapid clearance from circulation. In contrast, dysopsonins such as albumin can coat NP surfaces and enhance their biological properties by reducing complement activation, increasing blood circulation time, and reducing toxicity (Peng et al., 2013). The binding of lipids and other lipoproteins to NP surfaces can alter the uptake and transport of NPs (Hellstrand et al., 2009). Taking these observations into consideration, the concept of the personalized biomolecular corona has arisen, suggesting that NP coronas should be characterized in a disease-specific manner and not merely based on generalizations obtained from the literature (Hajipour et al., 2014).
While biomolecule adsorption alters many physicochemical properties of the NP such as size, shape, surface composition, and aggregation state, NPs may also induce conformational changes in the secondary structure of adsorbed proteins, altering their biological activities (Monopoli et al., 2012). In many cases, protein adsorption to NPs can induce fibrillation, immunosensitivity, and misfolding, substantially altering properties such as biodistribution and circulation half-life, cellular uptake, intracellular localization, tumor accumulation, and toxicity (Aggarwal et al., 2009; Karmali and Simberg, 2011). Conversely, other cases have demonstrated that biomolecule adsorption can serve to protect the body from the toxicity of bare NPs, facilitate receptor-mediated interactions, and improve pharmacokinetic profiles, which demonstrates its potential advantages (Peng et al., 2013). Fundamental forces including electrostatic interactions, hydrogen bonding, hydrophobic interactions, and charge transfer drive the association of biomolecules to the surface of NPs (Nel et al., 2009). A recent report by Tenzer et al. found that the biomolecular corona forms almost instantaneously (in less than 30 s) and is comprised of almost 300 different proteins, although it typically consists of a similar set of proteins in various quantities (Tenzer et al., 2013). However, it has been suggested that individual NPs cannot accommodate that many proteins on their surfaces and that a significantly lower number of proteins is present per particle, because current analyses are performed over large numbers of NPs and represent macroscopic averages of protein composition (Monopoli et al., 2012). The "hard" corona is the first layer of the corona, consisting of tightly and nearly irreversibly bound biomolecules. Atop the hard corona lie the "soft" corona layers, which are composed of more loosely associated biomolecules characterized by rapid exchange rates. With increasing time, less abundant, less mobile, and higher-binding-affinity proteins will subsequently replace the highly abundant, lower-affinity proteins (Vroman effect) (Vroman et al., 1980). However, a recent study questioned the applicability of the Vroman effect to NPs and found that the composition of the hard corona was constant over time, although the total amount of adsorbed proteins changed (Tenzer et al., 2013). Properties of NPs such as size and surface hydrophobicity have also been demonstrated to affect the composition and exchange rates of proteins such as transferrin (Tf) and albumin (Ashby et al., 2014). Although the formation of the biomolecular corona is unavoidable and plays a significant role in determining the biological behaviors of NPs, its importance has only recently received significant scientific attention. This mini review describes the importance of the NP-biomolecule corona in determining biological responses, supported by a number of recently published reports. We will succinctly cover important aspects related to biomolecular corona formation, how it is influenced by various physicochemical properties of NPs, the impact of NPs on the structure of proteins, and the impact of the biomolecular corona on the biological interactions of NPs. PHYSICOCHEMICAL PROPERTIES OF NPs AND THEIR EFFECT ON BIOMOLECULAR CORONA FORMATION The physicochemical properties of NPs determine the type of corona formed. Since the interactions between NPs and proteins occur at an interface, surface characteristics of NPs ultimately drive NP-biomolecule association.
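The Vroman-type exchange described above can be caricatured with a two-protein Langmuir competition for a finite surface; every rate constant and concentration below is an invented, order-of-magnitude placeholder, chosen only so the qualitative sequence (fast coverage by the abundant protein, slow displacement by the scarcer, higher-affinity one) is visible.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy competitive-adsorption (Vroman-type) model; all parameters are illustrative.
kon_A, koff_A, cA = 1.0, 0.5, 10.0   # abundant, low-affinity protein
kon_B, koff_B, cB = 0.5, 5e-4, 0.1   # scarce, high-affinity protein

def rhs(t, y):
    thA, thB = y                      # fractional surface coverages
    free = max(1.0 - thA - thB, 0.0)  # unoccupied surface fraction
    return [kon_A * cA * free - koff_A * thA,
            kon_B * cB * free - koff_B * thB]

sol = solve_ivp(rhs, (0.0, 1e4), [0.0, 0.0], method="LSODA", dense_output=True)
for t in (1, 100, 10000):
    thA, thB = sol.sol(t)
    print(f"t = {t:>5}: abundant protein {thA:.2f}, high-affinity protein {thB:.2f}")
```

Crude as it is, the model reproduces the reported signature: the abundant protein dominates early and is gradually replaced by the tighter binder, while a "hard corona" of near-irreversible binders (very small off-rates) would appear essentially constant on experimental time scales.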
To better understand corona formation, many methods have been established (Bertoli et al., 2014). Using a bioinformatics-inspired approach, Walkey et al. developed a protein corona fingerprint model that accounts for 64 different parameters to predict the cellular interactions of NPs. This model was found to be 50% more accurate than pre-existing models that only consider size, aggregation state, and surface charge. Many material properties act in concert to drive biomolecular corona formation; in this section we will focus on the effects of size, surface charge, and hydrophobicity. It is generally accepted that a positive correlation exists between NP size and protein association. For example, a two-fold increase in protein association was measured for 110 nm silver NPs (AgNPs) compared to 20 nm AgNPs (Shannahan et al., 2013). However, an inverse correlation was also reported between the amount of mouse serum protein adsorbed and the size of 5, 15, and 80 nm AuNPs (Martin et al., 2013). It was suggested that differences in curvature enabled a larger number of hydrophobic proteins to bind to the smaller NPs in this case. Recent reports have supported the correlation between the surface charge of NPs and biomolecule association. Poly(vinyl alcohol)-coated superparamagnetic iron oxide NPs (SPIONs) with negative and neutral surface charges adsorbed more serum proteins than dextran-coated SPIONs, leading to increased circulation times (Sakulkhu et al., 2014a). Biomolecule association to polystyrene (PSt) NPs with different sizes (50 and 100 nm) and three different surface charges [charge neutral (plain), negatively charged (carboxyl-modified), and positively charged (amine-modified)] was studied to elucidate the effect of size and surface charge of NPs on protein adsorption (Lundqvist et al., 2008). A size dependency in biomolecular corona composition was observed for both types of charged PSt NPs. For example, 100 nm negatively charged PSt NPs displayed a higher fraction of unique proteins, including Ig mu chain C region, apolipoprotein L1, and complement C1q, present in their coronas, as demonstrated by low homology in biomolecule composition compared to similar 50 nm NPs. The connection between NP hydrophobicity and protein association has also been demonstrated to be of great importance. Isothermal titration calorimetry was used to assess the stoichiometry, affinity, and enthalpy of NP-protein interactions (Cedervall et al., 2007; Lindman et al., 2007). When human serum albumin was titrated into solutions of NPs composed of different ratios of N-isopropylacrylamide (NIPAM) to N-tert-butylacrylamide (BAM), it was found that the more hydrophobic NPs (50:50) bound higher numbers of albumin molecules than the more hydrophilic NPs (85:15). Larger NPs bound more albumin than their smaller counterparts. Importantly, it was also shown that apolipoprotein A-I association was 50-fold greater for 50:50 NPs than for 65:35 NPs, demonstrating favorable interactions of the proteins with the hydrophobic NPs. Although correlations have been found with those properties, it should be noted that they can act only as predictive indicators of biomolecule association to NPs. This is important since the composition of biomolecules associated with NPs in vitro has been shown to be different from that in vivo (Sakulkhu et al., 2014b). Nonetheless, the findings suggest that the surface properties of NPs are responsible for driving biomolecule adsorption to the NP.
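The per-particle capacity argument from the previous section, and the size correlations above, can be sanity-checked with simple geometry: dividing the NP surface area by a protein footprint bounds the number of proteins a single particle can carry in one monolayer. The ~8 nm footprint used below is an assumed round number for a serum protein such as albumin.

```python
import numpy as np

def monolayer_capacity(d_np_nm, d_protein_nm=8.0):
    """Upper bound on proteins per particle in one adsorbed monolayer:
    sphere surface area (pi * d^2) divided by the protein's projected footprint."""
    return (np.pi * d_np_nm**2) / (np.pi * (d_protein_nm / 2.0)**2)

for d in (5, 15, 20, 80, 110):
    print(f"{d:>3} nm NP: <= ~{monolayer_capacity(d):.0f} proteins per particle")
```

Even a 110 nm particle holds only a few hundred proteins in a single layer, and a 20 nm particle a few dozen, consistent with the argument that a corona of ~300 protein species must be an ensemble average rather than a description of any individual small NP.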
Therefore, to further realize the potential of NPs as drug delivery vehicles, it is critical to coat their surface with a non-fouling layer, e.g., poly(ethylene glycol) (PEG), polyoxazoline, poly(vinyl alcohol), or polyglycerol, to minimize biomolecule association and therefore achieve more controllable cellular responses (Owens and Peppas, 2006; Romberg et al., 2008; Amoozgar and Yeo, 2012). IMPACT OF PEG LAYERS ON BIOMOLECULAR CORONA FORMATION Modification of the surface of NPs with a layer of PEG, or PEGylation, is known to reduce opsonization and enhance the blood circulation time of NPs by providing a "stealth" effect, i.e., rendering them invisible to immune cell recognition (Owens and Peppas, 2006). Recently, a number of studies have characterized the role of the PEG conformation (i.e., brush or mushroom) and its impact on biomolecular corona formation. The effect of PEG density on corona formation has been evaluated on numerous occasions. For example, NPs fabricated by the particle replication in non-wetting templates (PRINT) method were prepared with two different PEG densities corresponding to the brush (0.083 PEG/nm²) and mushroom (0.028 PEG/nm²) regimes (Perry et al., 2012). Brush NPs displayed nearly three- to four-fold lower binding of bovine serum albumin (BSA) than non-PEGylated NPs. Significant differences between NPs with the two PEG conformations in terms of diminished macrophage uptake or increased circulation half-lives were not directly measured, but brush NPs performed better than mushroom NPs on average. At constant size, a similar result was obtained using AuNPs, where an increase in PEG grafting density resulted in decreased serum protein adsorption (Walkey et al., 2011). In contrast, distinct differences were observed in terms of protein adsorption when size was considered. The same study found an inverse correlation between particle size and protein adsorption. The increased protein binding onto the smaller NPs was attributed to higher surface curvature and lower PEG-PEG steric interactions, which left a greater amount of the bare AuNP surface exposed (Figure 1B) (Walkey et al., 2011). When macrophage uptake was considered, two trends were observed. First, increased PEG density on similarly sized NPs resulted in decreased uptake. Second, at similar PEG densities, smaller NPs were taken up to a lesser extent than larger ones. Contrary to those results, in a study using PEGylated single-walled carbon nanotubes (SWCNTs), brush SWCNTs were found to display shortened blood circulation times, faster renal clearance, and increased spleen vs. liver uptake compared to mushroom SWCNTs (Sacchetti et al., 2013). Although these studies presented contrasting results with regard to PEG conformation, it is clear that the presence of PEG minimized biomolecular corona formation, which translated into enhanced pharmacokinetics for various NPs. However, to distinctly determine the role of PEG and PEG density in NP formulations, it is necessary to verify the biological properties of NPs in a case-by-case manner to obtain the desired response. CONFORMATIONAL CHANGES OF ADSORBED PROTEINS CAUSED BY NPs Achieving control over the toxicity of NPs is critical to ensure their optimal therapeutic effects. When a NP enters the body, it can alter the proteins that form its protein corona and therefore induce toxicity during therapy.
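The brush/mushroom labels quoted above for the PRINT particles can be rationalized with de Gennes' overlap criterion, which compares the mean spacing between grafts with the chain's Flory radius. The monomer length and chain length below (0.35 nm and 22 ethylene-oxide units, i.e. a ~1 kDa PEG) are assumptions chosen to illustrate the criterion, not parameters taken from the cited study.

```python
def peg_regime(sigma_per_nm2, n_units=22, a_nm=0.35):
    """Classify a grafted PEG layer: brush if the mean graft spacing
    D = sigma**-0.5 is below twice the Flory radius R_F = a * N**(3/5)."""
    D = sigma_per_nm2 ** -0.5
    R_F = a_nm * n_units ** 0.6
    return ("brush" if D < 2.0 * R_F else "mushroom"), D, R_F

for sigma in (0.083, 0.028):  # grafting densities quoted for the PRINT NPs
    regime, D, R_F = peg_regime(sigma)
    print(f"sigma = {sigma} PEG/nm^2: D = {D:.1f} nm, R_F = {R_F:.1f} nm -> {regime}")
```

With these assumed chain parameters, the two quoted densities fall on opposite sides of the D = 2R_F boundary; for longer PEG chains the same densities could both sit in the brush regime, which is one more reason conformation must be assessed per formulation.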
Some of these changes include alterations in protein conformation, protein function, and defective transport leading to the overexpression of inflammatory factors (Baugh and Donnelly, 2003; Wolfram et al., 2014a). Many physicochemical properties of NPs affect protein adsorption, which influences how NPs interact with cells and tissues. The proteins adsorbed on the surface of the NPs can still be recognized as the native proteins by an interacting cell, and as a result, these denatured or misfolded proteins can trigger inappropriate cellular processes (Lynch et al., 2006). In a study investigating protein stability using silica NPs, conformational changes in protein variants of carbonic anhydrase II on NP surfaces occurred in a step-wise manner, where the least stable variants exhibited the quickest misfolding kinetics (Karlsson et al., 2000). When exposed to NPs for longer periods of time, all variants eventually folded into the same unstable state. A number of studies have been dedicated to characterizing the interaction of albumin with NPs. AuNPs, for example, modified albumin from its stable secondary conformation to its unstable tertiary conformation (Shang et al., 2007). In the case of charged PSt NPs, it was observed that albumin maintained its native secondary structure while associated with carboxylated NPs, enabling its interaction with the albumin receptor. However, amine-terminated NPs denatured albumin and subsequently led to a loss of specificity toward the albumin receptor in favor of scavenger receptors, indicating that the transition to unstable proteins alters their activity in the body (Fleischer and Payne, 2014). This illustrated that the misfolding of proteins can result in an alteration of the cell surface receptors targeted by NPs, which could decrease their targeting efficacy (Fleischer and Payne, 2014). Mortimer et al. investigated the role of scavenger receptors in NP-protein interactions (Mortimer et al., 2014). Albumin binding to synthetic layered silicate NPs (LSNs) induced protein unfolding akin to heat denaturation of albumin. Class A scavenger receptors, which are the dominant receptors involved in the mononuclear phagocyte system (MPS), required the presence of the albumin corona to recognize the LSNs. The conformational changes of albumin can not only lead to increased NP clearance but also alter cellular uptake. A study characterized the biomolecular corona of negatively charged disulfide-stabilized poly(methacrylic acid) nanoporous polymer particles (PMA SH NPPs) following incubation in complete media containing 10% fetal bovine serum (FBS). Adsorption of BSA, a major component of FBS, onto the surface of the NPPs was found to result in a conformational change from its native state. Notably, denatured BSA on NPPs caused a reduction in the internalization efficiency of the NPs into human monocytic cells, compared to the bare particles, due to reduced cell membrane adhesion. However, a different conformation of BSA triggered class A scavenger receptor-mediated phagocytosis in differentiated macrophage-like cells (dTHP-1) without a significant impact on the overall degree of cell internalization. Recognizing that both the composition and orientation of the protein corona are important for the assessment of biological interactions may lead to the prevention of off-target cellular interactions of NPs (Yan et al., 2013).
CONSIDERATIONS OF THE BIOMOLECULAR CORONA FOR NP-BASED TARGETED DRUG DELIVERY NP interactions with biomolecules can significantly affect the efficacy of nanomedicines. Alterations in the conformations or activities of biomolecules can dramatically impair NP-based drug delivery. These alterations may result in changes in cellular uptake, drug release, and biodistribution profiles. Importantly, new methods to study those NP-cell interactions at the molecular level will yield insight into how the biomolecule corona can alter the fate of NPs (Bertoli et al., 2014). In Table 1, we have summarized the major considerations one must take into account when designing and evaluating targeted NP drug delivery systems to achieve optimum efficacy.

Table 1. Major considerations in designing and evaluating targeted NP drug delivery systems.
Physicochemical properties of NPs and their effect on biomolecular corona formation:
- Size: larger NPs adsorb more proteins to their surfaces (Shannahan et al., 2013).
- Surface charge: charged NPs adsorb more proteins to their surfaces; alteration of particle zeta potential (Lundqvist et al., 2008).
- Hydrophobicity: more hydrophobic NPs adsorb more proteins to their surfaces (Cedervall et al., 2007; Lindman et al., 2007).
Impact of PEG layers on biomolecular corona formation:
- High-density brush PEG conformations adsorb less protein than mushroom conformations (Walkey et al., 2011; Perry et al., 2012).
Conformational changes of adsorbed proteins caused by NPs:
- Result in protein misfolding (changes in secondary structure) (Karlsson et al., 2000; Shang et al., 2007; Fleischer and Payne, 2014).

Size is an important property of NPs that affects their distribution within the body. Biomolecular corona formation can increase the original size and alter the pharmacokinetics of NPs (Lundqvist et al., 2008). In some cases, this size increase could be beneficial, since NPs smaller than 5 nm are readily excreted through renal filtration (Choi et al., 2007; Sunoqrot et al., 2014). Yet the size increase caused by biomolecule adsorption may result in a decreased therapeutic efficacy of NPs for diseases such as pancreatic cancer that require nanotherapies with particle sizes smaller than 50 nm (Cabral et al., 2011). Considering those changes caused by the biomolecular corona, it appears essential to characterize the therapeutic and targeting efficacies of NPs under relevant conditions. Silicon dioxide (SiO2) NPs were functionalized with Tf to validate their ability to maintain targeted interactions in physiologically relevant cell culture conditions. In FBS-containing medium, Tf-functionalized NPs lost their ability to selectively target A549 lung cancer cells (Figures 1C,D) (Salvati et al., 2013). Mirshafiee et al. prepared 75 nm SiO2 NPs and studied their ability to react with synthetic, surface-bound azide groups using copper-free click chemistry (Mirshafiee et al., 2013). The results of this study confirmed that the biomolecular corona creates a barrier that screens the interaction of the ligand and its target on a separate surface. While NP characteristics, such as size, shape, and surface charge, change due to biomolecular corona formation, drug release kinetics from the NPs can either be enhanced or disrupted. Liposomes can undergo shrinkage due to osmotic forces and may undergo a burst-release effect upon entering the blood, resulting in rapid drug release (Wolfram et al., 2014b). In contrast, protein binding on NPs has been shown to delay drug release by hindering drug diffusion through the NP matrix (Paula et al., 2013) and reducing the burst effect (Behzadi et al., 2014).
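The two size thresholds quoted in this section (renal excretion below roughly 5 nm, and roughly 50 nm for poorly permeable tumors such as pancreatic cancer) invite a quick design check; the 10 nm corona thickness assumed below is an illustrative placeholder.

```python
def corona_size_check(core_nm, corona_thickness_nm=10.0):
    """Estimate the hydrodynamic diameter after corona formation and test it
    against the two size thresholds quoted in the text."""
    d = core_nm + 2.0 * corona_thickness_nm
    return d, d < 5.0, d < 50.0  # diameter, renally cleared?, under 50 nm limit?

for core in (4, 25, 75):
    d, renal, tumor_ok = corona_size_check(core)
    print(f"core {core:>2} nm -> {d:.0f} nm with corona; "
          f"renally cleared: {renal}; below 50 nm tumor limit: {tumor_ok}")
```

The 4 nm example shows the double edge discussed above: corona growth rescues the particle from renal filtration, yet the same growth can push a 30-40 nm design over the 50 nm tumor-penetration limit.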
The biomolecular corona may alter the toxicity profiles of NPs in a positive manner as well. Evidence has accumulated that the biomolecular corona may mitigate NP-induced toxicities. Decreased negative cellular impacts of carbon nanotubes were observed when they were coated with plasma proteins. Nanotubes with a higher protein density displayed less toxicity than those with a lower protein density (Ge et al., 2011). The effect of the biomolecular corona of 22 nm silica NPs with different surface charges on toxicity was also evaluated. The corona formed on each NP was confirmed to be unique, and SiO2-COOH NPs exhibited lower toxicity than bare SiO2 and SiO2-NH2 NPs (Mortensen et al., 2013). These results indicated that NP-protein interactions can be utilized to reduce the toxicities of some NPs that are otherwise known to be toxic to biological systems. CONCLUSIONS AND FUTURE DIRECTIONS The biomolecular corona has been demonstrated to have a major impact on the biological behaviors of NPs. Physicochemical properties of NPs including size, surface charge, and hydrophobicity affect the relative amounts, types, and conformations of the proteins that adsorb onto the NP. NPs functionalized with disease-specific targeting ligands are positioned to revolutionize the treatment of debilitating diseases such as cancer by achieving targeted and selective cellular interactions. However, the biomolecular corona diminishes those cellular interactions by making the ligands inaccessible at the NP surface. Therefore, the development of strategies to overcome the negative impact of the protein corona on NP targeting is necessary. Recently, attaching targeting ligands to longer PEG tethers in combination with backfilling of the remaining bare surface with short PEG chains has been shown to promote the formation of targeted interactions in vitro (Dai et al., 2014). It is evident that characterization and biological evaluation of NPs must be performed in the presence of physiologically relevant protein levels, which will ultimately result in enhanced in vivo efficacy of targeted drug delivery platforms.
4,713.8
2014-11-27T00:00:00.000
[ "Medicine", "Materials Science", "Chemistry" ]
A Nonperturbative Proof of Dijkgraaf-Vafa Conjecture In this note we exactly compute the gaugino condensation of an arbitrary four dimensional N=1 supersymmetric gauge theory in confining phase, using the localization technique. This result gives a nonperturbative proof of the Dijkgraaf-Vafa conjecture. Introduction and summary Analytic computations in quantum field theories are important, but very hard in general. Important quantum field theories in which we can compute some quantities exactly are supersymmetric (SUSY) field theories. The localization technique for SUSY field theories, which originated in [1][2], is a general way to perform such exact computations. Recently, this technique has been applied to various kinds of SUSY field theories (for example, [3]-[21]), after the important work by Pestun [22]. It should be stressed that using the localization technique we can compute non-topological quantities. In particular, using it, we can compute the gaugino condensation in four dimensional N = 1 SUSY Yang-Mills theories in confining phase [23].² In this paper, we will compute the gaugino condensation of four dimensional N = 1 SUSY gauge theories with general chiral multiplets and a superpotential in confining phase. In order to do this, we first integrate out the chiral multiplets, while keeping the vector multiplets. This is consistent because we can deform the theory without changing the gaugino condensation (i.e. using the localization technique) such that the theory is arbitrarily weakly coupled [23].³ This integration of the chiral multiplets can be done perturbatively, and we only need the effective superpotential. Thus, this can be done by the methods used in [31] and [32]. After the integration of the chiral multiplets, we have N = 1 SUSY gauge theories with only vector multiplets and a superpotential. As shown in [32], the superpotential is a function of the gaugino bi-linear S only (and the coupling constants in the original superpotential). For this theory, we will compute the gaugino condensation. Therefore, we can compute the gaugino condensation in four dimensional N = 1 SUSY gauge theories with general chiral multiplets and a superpotential. The Dijkgraaf-Vafa conjecture is that the glueball superpotential of the N = 2 SUSY U(N) gauge theory deformed by a superpotential is computed by a corresponding matrix model [33]. There are "proofs" of this conjecture, i.e. [31] and [32]; however, in both [31] and [32] perturbative integrations of the chiral multiplets were computed, and the nonperturbative dynamics of the gauge fields was (implicitly) assumed to amount to just adding the Veneziano-Yankielowicz superpotential to the non-trivial superpotential obtained by the integration of the chiral multiplets.⁴

² The gaugino condensation was computed in various ways; see [24]-[30].
³ Here, we consider the theory on R³ × S¹_R and then take the R → ∞ limit. (The gaugino condensation does not depend on R.) With a non-trivial v.e.v. of the Wilson line around S¹_R, the symmetry breaking occurs at a very high scale compared with the scale Λ̃, the effective dynamical scale determined by the deformed action. Thus, the computations reduce to a 3d Abelian theory in the low-energy region far below 1/R. Without this infra-red cut-off R, the deformed action would not be weakly coupled, because of the low-energy modes below Λ̃, which remain strongly coupled.
In this paper, we show that the perturbative superpotential together with the Veneziano-Yankielowicz superpotential indeed gives the correct gaugino condensation. This can be regarded as a nonperturbative proof of the Dijkgraaf-Vafa conjecture.⁵ It should be noted that we can compute the gaugino condensation for any four dimensional N = 1 SUSY gauge theory (with a Lagrangian and in confining phase) according to the discussions in this paper.⁶ It would be interesting to study applications of our method. We hope to return to these problems in the future. The organization of this paper is as follows: In section 2 we compute the gaugino condensation for four dimensional N = 1 SUSY gauge theory with only vector multiplets and a generic action. In section 3 we show that our results imply the nonperturbative proof of the Dijkgraaf-Vafa conjecture. Gaugino condensation in theory with a generic superpotential In this section, we will consider four dimensional N = 1 SUSY gauge theory with vector multiplets only (no chiral multiplets) on R³ × S¹_R with a simple gauge group G and the following superpotential, in which τ₀ and g_i are complex constants, S(y, θ) = S₀(y) + θS₁(y) + θθS₂(y) is the glueball superfield whose lowest component is the gaugino bilinear S₀ ∼ Tr(λλ), and F(S, g_i) is a function of S and the coupling constants g_i. Here we do not assume that the Kähler potential is canonical. Note that terms containing Tr((λλ)ⁿ) with n > 1 and terms with derivatives are regarded as part of the Kähler potential [32]. Thus, this superpotential represents a general superpotential for a theory without chiral multiplets. Note that the polynomials of S in the superpotential are composite operators and should be defined with a regularization, for example a point splitting. Thus, the classical constraints are not imposed on these composites. We will compute the effective superpotential for this theory, determine the vacua, and then compute the gaugino condensation. Using the result of [23], we can compute them at weak Yang-Mills coupling by the localization technique. In superspace, this is realized by adding the term (2.3) to the anti-superpotential, with t ≫ 1. The dynamical scale Λ̃ of the theory with the additional term (2.3) can be arbitrarily low.

⁴ More precisely, in [32], by applying the generalized Konishi anomaly equation to the 1PI effective action written in terms of S, one can justify the addition of the Veneziano-Yankielowicz superpotential. However, as stressed in [32], this only works for the case without symmetry breaking, because there are no coupling constants coupled to the S_i, where S = Σ_i S_i. We thank Y. Nakayama for useful discussions on this point. In [32] it was also noted that the generalized Konishi anomaly would have higher-loop corrections.
⁵ In [34], the gaugino condensation was computed in a way which is different from ours and is related to the N = 2 Seiberg-Witten theory. The close connection of the gaugino condensation to Seiberg-Witten theory was also discussed in [35]. In [36], an off-shell extension of the vacuum expectation value was used to compute the gaugino condensation, and it was claimed to give a non-perturbative proof of the Dijkgraaf-Vafa conjecture.
⁶ The perturbative superpotential should be computed using the results in [31] and [32] for general N = 1 gauge theories, for example [37][38]; however, it would be difficult to obtain the superpotential in an explicit closed form.
With this deformed action, all we need are semi-classical computations around the anti-self-dual (ASD) connections. Note that the radius R of S¹_R serves as an infra-red regulator, which makes the deformed theory indeed weakly coupled [23]. As in [23], in order to determine the vacua and evaluate the gaugino condensation, we would like to compute the expectation value X of the gaugino bi-linear, where ⟨···⟩₀ means the expectation value with the gauge coupling τ₀ and a Kähler potential, without the superpotential F. We also used δ₁, δ₂, which are the SUSY transformations corresponding to θ₁, θ₂. In order to evaluate this, it might seem necessary to consider ASD connections with many fermion zero modes, because the superpotential term F includes the fermions. Doing this explicitly would be interesting; in this paper, however, we will use a different method. First, we introduce S̄₀(x) as the shift of S₀(x) by a constant S̄, and define G from F by subtracting the zeroth- and linear-order terms in S̄₀. Then, in terms of S̄₀(x), we can express X as (2.7), where we kept the F(S)|_{S=S̄} term, which vanishes for constant S̄, for later convenience. This term will be relevant if we regard S̄ as a background constant chiral superfield. Note that the action of ⟨···⟩_S̄ contains only a linear term in S̄₀. Now we take the constant S̄ to satisfy (2.9), which means ⟨S(x)⟩_S̄ = 0. We will see later that this S̄ indeed gives the gaugino condensation, i.e. S̄ = ⟨S(x)⟩. The condition (2.9) will be a self-consistent equation.⁷ Then we expand exp(∫d⁴y δ₁δ₂ G(S₀(y), g_i, S̄₀)) in (2.7) in terms of S̄₀. It will be a linear combination of 1 and of terms I_n, where C_n = C_n(S₀, g_i) is determined by G. We can easily see that I_n satisfies δ̄_i I_n = 0, because δ̄_i S₀(x) = 0 and [δ̄_i, δ₁δ₂] is a space derivative. Here δ̄_i is the SUSY transformation corresponding to θ̄_i. Furthermore, we can show that I_n = ∫d⁴x δ₁δ₂ (C_n (S₀(x))^{n−1} e^{i a^μ ∂_μ} S̄₀(x)) + δ̄₁δ₁δ₂(···) + δ̄₂δ₁δ₂(···). (2.11) This follows from (∂/∂a^ν) e^{i a^μ ∂_μ} S̄₀(x) = i(δσ_ν δ̄) e^{i a^μ ∂_μ} S̄₀(x), which means e^{i a^μ ∂_μ} S̄₀(x) = S̄₀(x) + (δσ^μ δ̄)(···). Therefore, for the δ̄_i-closed correlators which we are considering, we can make this replacement, where a_j is an arbitrary constant. Note that we can do this replacement for each I_n in a product of I_n's in the expansion of the exponential, with a different a_j for each I_n. Then, X will be written as a linear combination of terms of the form (2.13), where m_α ≥ 2. Now we take the following "large separations" limit: |a^α_j| → ∞ with |a^α_j − a^β_k| → ∞ for (j, α) ≠ (k, β) and |a^α_j − b| → ∞. Then, we use the clustering properties to factorize the correlator for each S̄(x_α + a^α_i) if |x_α + a^α_i| is not close to any other insertion points. Here we can see that ⟨S̄₀(x_α + a^α_i)⟩_S̄ = 0, ⟨δ̄_i S̄₀(x_α + a^α_i)⟩_S̄ = 0 and ⟨δ₁δ₂ S̄₀(x_α + a^α_i)⟩_S̄ = 0, by the definition of S̄, i.e. ⟨S(x)⟩_S̄ = 0. Furthermore, the number of integration points is M, which is strictly smaller than the number of the a^α_i, because m_α ≥ 2. Therefore, there is at least one isolated insertion of an operator which makes (2.13) vanish, and we find a result from which we can evaluate the superpotential, the vacua, and the gaugino condensation as in [39,23]. Note that this also implies ⟨S₀(x)⟩ = ⟨S₀(x)⟩_S̄. As we can see from (2.8), taking ⟨···⟩_S̄ just replaces the coupling constant τ₀ by the shifted coupling τ̃ of (2.15). Thus, the superpotential and the vacua⁸ are found, as in [39,40,23], by the semi-classical computations around the fundamental monopoles, which have two fermion zero modes.
Note that the 1-loop factor in the localization technique only contributes to the Kähler potential [41]. More precisely, the definition of ⟨···⟩_S̄, (2.8), is the path-integral with the superpotential W̃. Thus, the effective superpotential is obtained in a form in which c₂ is the dual Coxeter number of G, e(G) is a group-dependent constant with, for example, e(SU(N_c)) = 1, and Λ̃ is the dynamical scale in the 1-loop Pauli-Villars regularization determined by the deformed coupling. In order to find the vacua, we need to evaluate ∂W_eff(X, τ₀)/∂X = 0, where X are the (would-be) moduli. With this and ∂(W_eff(X, τ₀) − f(S̄))/∂τ₀ = S̄(X, τ₀), we see that an X-independent S̄ is a solution. Thus, we can take S̄ to be independent of X. Here g²(μ) comes from the path-integral measure, which is defined with the coupling constant τ₀. Now we consider the gaugino condensation ⟨S⟩ (= S̄). We have seen how the original superpotential W_V can be written in the deformed theory; the effective potential should then reproduce it, where we have used ∂G/∂τ = 0, which follows from ⟨(polynomials of S̄₀)⟩ = 0. Thus, the gaugino condensation S̄ can be computed using (2.25). The superpotential is evaluated accordingly, with S̄ determined by (2.25). We also define a dynamical scale Λ in the 1-loop Pauli-Villars regularization of the coupling constant τ₀. The relation between Λ and Λ̃ then follows, where we have used (2.15), and S̄ = S̄(Λ) is given by (2.25). Now we see that the following glueball superpotential, (2.29), reproduces the gaugino condensation and the effective superpotential, where we can regard W_S(S, Λ) as a function of S and Λ̃ by using the relation Λ̃ = Λ̃(Λ). Indeed, we find (2.30), which is equivalent to S = e(G) w Λ̃³ e^{(1/c₂) ∂F(S)/∂S}. The superpotential W_S is evaluated with (2.30) to W_S → c₂ S + F(S) − S ∂F(S)/∂S, which is the correct one. Therefore, the glueball superpotential is (2.29), which is just the sum of the Veneziano-Yankielowicz superpotential and F(S). We can easily generalize the results to a theory with a semi-simple gauge group. Let us consider a 4d N = 1 SUSY gauge theory of only vector multiplets with a semi-simple gauge group G = ⊗_a G_a and a superpotential of the same type, (2.31). Following the previous discussion, we can easily see that each S_a satisfies the analogue of (2.30) with ∂F/∂S_a. Here, for a U(1) gauge group, there is no dynamically generated superpotential, and S_a = 0. A proof of Dijkgraaf-Vafa conjecture Let us consider a 4d N = 1 gauge theory with gauge group G and chiral multiplets coupled to G. With a generic tree-level superpotential, where the g_i are coupling constants⁹ and the Φ_a are chiral superfields, the theory is expected to be in a confining phase¹⁰ and the gaugino condensation is non-trivial; this is what we will compute. We will compute the correlation functions of operator insertions O which satisfy δ̄_i(O) = 0. Thus, we can add the regularization term (2.3) of the localization for the vector multiplets [23]. Then, the theory is effectively at weak coupling and the effective dynamical scale can be set arbitrarily low. We can also add large kinetic terms for the chiral multiplets.¹¹ Then, we can integrate out the chiral multiplets perturbatively, with the vector multiplets regarded as a background, because the effective gauge coupling constant is very small in the t → ∞ limit.

⁹ If the low-energy theory is a non-trivial conformal fixed point, we add an arbitrarily small perturbation to the coupling constants or a small deformation of the vacuum we choose.
¹⁰ With the chiral multiplets, the Wilson loop will not obey the area law.
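To spell out the extremization behind (2.29) and (2.30), the following LaTeX fragment writes the glueball superpotential as the Veneziano-Yankielowicz term plus F(S) and derives the gap equation and the on-shell value. The e(G) and w factors quoted above are treated as given normalization constants, and any scheme-dependent constant is absorbed into Λ̃; this is a sketch of the logic, not a reproduction of the paper's precise conventions.

```latex
\begin{align}
W_S(S) &= c_2\,S\left[1-\log\frac{S}{e(G)\,w\,\tilde{\Lambda}^{3}}\right]+F(S),\\
0=\frac{\partial W_S}{\partial S}
  &= -\,c_2\log\frac{S}{e(G)\,w\,\tilde{\Lambda}^{3}}+\frac{\partial F}{\partial S}
  \;\;\Longrightarrow\;\;
  S = e(G)\,w\,\tilde{\Lambda}^{3}\exp\!\Big(\frac{1}{c_2}\frac{\partial F}{\partial S}\Big),\\
W_S\big|_{\text{on shell}} &= c_2\,S+F(S)-S\,\frac{\partial F}{\partial S},
\end{align}
```

in agreement with the gap equation and the on-shell superpotential quoted in the text.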
Thus, precisely speaking, the phase will not be a confining phase, but a phase with a mass gap with possible free U(1) factors. For simplicity, we will call it a confining phase.
¹¹ The kinetic terms for the chiral superfields are written in terms of the Kähler potential. The regularization term (2.3) for the vector multiplet is in the anti-holomorphic superpotential. Neither of them affects the effective superpotential or the correlation functions of the operators in the chiral rings.
Here we expand the bosonic fields in the chiral multiplets around the classical vacua. Note that the 1-loop computation is exact in the usual localization technique, where we take the t → ∞ limit with the regularization term tδV and a rescaling of the fields. In our case, the kinetic terms of the chiral multiplets contain the vector multiplets, which are regarded as background fields. Thus the saddle points of the large kinetic terms are non-trivial, and integrations over the saddle points with the superpotential give a non-trivial effective superpotential. It would be interesting to follow this line exactly and find the effective superpotential, which should be a matrix-model computation, because the saddle points are essentially the zero modes of the chiral multiplets. On the other hand, in [31] the perturbative computation of the chiral multiplets with the vector multiplet background was done by deforming the anti-superpotential in an appropriate way. Furthermore, in [32] it was shown that the effective superpotential obtained by integrating out the chiral superfields can be determined by the generalized Konishi anomaly. Thus, in this paper, we assume that the integration over the chiral multiplets is done by those methods. Here, the chiral multiplets with a classical superpotential can have a non-trivial moduli space of vacua. With a generic superpotential we have a discrete set of vacua, although the moduli space need not be discrete. We assume the moduli space is discrete, introducing a small deformation of the superpotential, for example a mass term, if needed. Then, we redefine the chiral superfields as Φ′_a = Φ_a − Φ̄_a, where Φ̄_a is the value of Φ_a at the classical vacuum we have chosen. The perturbative calculation is done around this vacuum. Depending on the choice of the classical vacuum, the original gauge group G will be broken to a semi-simple gauge group with U(1) factors, which we denote G′. The glueball superfields S_a can be defined in this setting, because the gauge symmetry is broken at a very high energy scale compared to the effective dynamical scale of the gauge theory, which is lowered by the regulator term.¹² In terms of the S_a, we have the effective superpotential W_V, (2.31), with τ₀^a = τ₀ for every a, after the integration of the chiral superfields. Then, applying the discussion in the previous section to the effective action (2.31), we conclude that the effective superpotential, from which we can compute the gaugino condensation S = Σ_a S_a, is given by just adding the Veneziano-Yankielowicz superpotentials for all simple gauge groups in G′ to the effective action (2.31). In particular, if we consider a chiral multiplet in the adjoint representation of G, it was shown in [31] and [32] that the perturbative effective superpotential for the chiral multiplet is equivalent to that of the matrix model of Dijkgraaf and Vafa. Then, the path integral of the vector multiplets gives just the Veneziano-Yankielowicz terms, according to the discussion in the previous section.
The final effective action (2.31) is the one conjectured in [33]. Therefore, this can be regarded as a nonperturbative proof of the Dijkgraaf-Vafa conjecture.¹³
4,390.4
2015-09-09T00:00:00.000
[ "Physics" ]
Absorption of Microwaves During Plasma Heating at the Second Harmonic of Electron Cyclotron Resonance in Tokamaks and Stellarators: Linear Theory and Experiment We study the microwave absorption during electron cyclotron resonance heating (ECRH) by the extraordinary wave at the second harmonic (X2 mode) in the T-10 tokamak and TJ-II stellarator in a wide range of plasma densities, and compare the experiments with the classical formulas for the absorption of the injected ECR power. Empirical relations for the absorption efficiency and for the critical plasma density n_cr, which separates the regions with full and partial absorption of the injected ECR power, are obtained using numerical simulation of the heat transport with the transport model of canonical profiles. It is shown that in both devices a range of densities exists where the absorption predicted by the classical formulas is almost full, while according to the empirical formula only a small fraction of the power is absorbed. The obtained relations allow one to optimize the conditions of ECRH in toroidal systems for magnetic plasma confinement. INTRODUCTION Microwave heating on harmonics of the electron cyclotron resonance (ECRH) is one of the most effective methods for additional plasma heating in modern fusion devices with magnetic confinement, tokamaks and stellarators. However, it is known that in the nonrelativistic formulation of the equations, microwaves with purely transverse propagation are not absorbed. B.A. Trubnikov in his pioneering work [1] determined the elements of the plasma permittivity tensor in the relativistic formulation. Then it was shown that in the weakly relativistic approximation, full absorption is realized only for the ordinary mode at the fundamental cyclotron harmonic (O1 mode) and for the extraordinary mode at the second harmonic (X2 mode), while for other modes the absorption is small [2]. Subsequently, these results were generalized to microwaves with an arbitrary direction of propagation [3]. Further progress in this problem is described in the reviews [4][5][6][7]. Along with heating of the plasma electrons in tokamaks and stellarators, microwaves can excite or change various types of plasma oscillations, which, in turn, can affect anomalous transport. Knowledge of the absorbed ECRH power is necessary to establish the properties of these oscillations and to find scalings for their description [14][15][16], as well as for quantitative modeling of the transport.
In [17][18][19][20], the absorption of the X2 mode propagating perpendicular to the magnetic field is discussed on the basis of experiments in the T-10 tokamak and TJ-II stellarator [19,21]. General principles of equivalence between discharges in tokamaks and stellarators have been proposed. For a pair of equivalent discharges, the equality of their electron and ion temperatures was shown [19,21]. Common properties of the energy transport in tokamaks and stellarators were established. In this paper, we analyze the generally accepted theoretical formula for the plasma optical thickness upon absorption of the X2 mode and compare it with the results of the T-10 experiments on heating at the first and second ECR harmonics. From the analysis of these experiments by the Canonical Profile Transport Model (CPTM), an empirical formula for the absorbed microwave power is derived, which is compared with the theoretical formula, and the efficiency of ECRH in the T-10 tokamak and TJ-II stellarator is estimated. Next, we discuss the possibility of a transition to full absorption of microwaves. In conclusion, we summarize the main results. PLASMA OPTICAL THICKNESS In the above papers, microwave absorption is usually characterized within the geometric-optics approximation in terms of the plasma optical thickness, defined as τ = 2 ∫ Im k ds, (1) where the integration is performed over the resonant region of the ray path, s is a coordinate along the ray, and Im k is the imaginary part of the wave vector, determined by the solution of the dispersion equation (2), where D is the plasma dielectric permittivity tensor. The fraction of absorbed power is determined by the expression η = 1 − exp(−τ). (3) Here, we restrict ourselves to the case of X2-mode propagation across the magnetic field. We consider an approximate expression (4) for the plasma optical thickness from [4]. Here, ω_pe² is the squared plasma frequency and ω_ce is the electron cyclotron frequency. The values of the plasma density and magnetic field in the integration of (1) are taken in the resonant zone. Away from the cutoff density, the parameter is varied in the range from 1.1 to 1.3. Let us rewrite Eq. (4) in practical units: the density n in [10¹⁹ m⁻³], the electron temperature in keV, and the magnetic field B in teslas. Thus, the normalized temperature is T_e[keV]/511, and (4) takes the form (5), scaling in proportion to n T_e R/B. Here, we normalized the major radius of a tokamak R to the major radius of the T-10 device (1.5 m). We set the parameter to a value that gives an accuracy of the optical thickness estimation of 10-30%. ANALYSIS OF EXPERIMENTAL RESULTS The microwave absorption at the first and second ECR harmonics in the T-10 tokamak is considered in [17,18]. Quantitative analysis of the ECRH using CPTM led to the following conclusions [18][19][20]. The optical thickness at a fixed magnetic field depends not only on the density but also on the electron temperature; nevertheless, at a sufficiently high average density n, full absorption of microwaves takes place. At sufficiently low density, absorption may become partial. We call the boundary of the transition from partial to full absorption the critical density n_cr.
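Equation (3) saturates exponentially, which is what separates the full- and partial-absorption regimes discussed below; a minimal numerical illustration (no paper-specific inputs):

```python
import numpy as np

def absorbed_fraction(tau):
    """Single-pass absorbed power fraction, Eq. (3): eta = 1 - exp(-tau)."""
    return 1.0 - np.exp(-tau)

for tau in (0.1, 0.5, 1.0, 3.0):
    print(f"tau = {tau:>3}: eta = {absorbed_fraction(tau):.3f} "
          f"(small-argument estimate eta ~ tau = {tau})")
```

For tau of order 0.1 the absorbed fraction is essentially linear in tau (and hence in density at fixed temperature and field), while for tau above about 3 the absorption is effectively full; this is the expansion used in the comparison section below.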
So, if n ≥ n_cr (6), the absorption is full, independently of the electron temperature. Otherwise, if n < n_cr (7), the absorption depends on the value of the partial electron pressure. If the condition (7) is satisfied but the electron pressure is sufficiently high (8), then the full absorption of the microwave is preserved, since the optical thickness increases due to the temperature. If the condition (7) is fulfilled and the electron pressure is low (9), then the absorption remains partial. Results of the analysis from [18][19][20][21] are collected in Fig. 1. Here, on the plane (density-electron temperature), shots with ECRH at the fundamental O mode (O1) and at the second harmonic X mode (X2) are shown. Under ECRH at the first harmonic [22], in this area the temperature rapidly increases with decreasing density. A sharp difference in temperatures is seen when low-density plasma is heated. Under ECRH at the second harmonic, the temperature also increases, but very slowly. As a result, at low density the temperatures differ by a factor of 4-5. We suppose that such a difference arises owing to partial absorption of the X2 mode. The blue line in Fig. 1 marks the boundary between the areas of full and partial absorption (8). It describes the empirical condition of full absorption obtained for T-10 (see condition (5) from [19]), relation (10), where n_e0 and T_e0 are the central electron density and temperature. A more detailed analysis of the experimental data shows that the factor 10 in relation (10) should be reduced to 8.8, and the central value of the plasma density may be replaced by the average value without large errors. Thus, for the boundary of the full absorption area, we obtain the dependence (11), shown by the blue line in Fig. 1. Two series of shots with different densities and different ECRH powers of 1.1 and 0.55 MW (circles and dash-dotted lines in Fig. 1) are very interesting as well. We see that the electron temperature in these series increases very slightly with decreasing density. It confirms our assumption that the efficiency of microwave absorption decreases with decreasing density. Note that the position of the experimental points at the power of 1.1 MW for this series practically coincides with the upper limit for the electron temperatures reached in experiments with heating power from 0.4 to 2.3 MW (see Fig. 5 in [19]), marked by the red line in Fig. 1. This occurs because the experimental series with a power of 1.1 MW was carried out in shots with a carbon limiter. Later, when the power of the gyrotrons at the device reached 2.3 MW, the carbon limiter was replaced by a tungsten limiter. With such a limiter, the electron temperature turned out to be lower than with the carbon limiter. Recall that in the 1980s, when heating from gyrotrons was carried out at the first ECR harmonic, the limiter was carbon as well. So, Fig. 1 demonstrates the effect of reduced microwave absorption with decreasing density, both in regimes with different heating power and in regimes with different limiter materials.
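Relation (11) is quoted above only through its numerical factor (8.8, replacing the original 10) and the statement that it bounds a product of density and electron temperature. Assuming it reads n·T_e0 ≥ 8.8 in units of 10¹⁹ m⁻³ × keV (our reading, not a formula reproduced from the source), the boundary logic of Fig. 1 can be sketched as follows.

```python
def full_absorption(n_avg_1e19, Te0_keV, n_cr_1e19=2.8, C=8.8):
    """Sketch of the T-10 full-absorption criterion: above n_cr, absorption is
    full regardless of temperature; below n_cr, a sufficiently high electron
    pressure (assumed form n*T >= C, in 1e19 m^-3 * keV) restores full absorption."""
    return n_avg_1e19 >= n_cr_1e19 or n_avg_1e19 * Te0_keV >= C

for n, T in ((4.0, 1.0), (2.0, 3.0), (2.0, 5.0)):
    print(f"n = {n}e19 m^-3, Te0 = {T} keV -> full absorption: {full_absorption(n, T)}")
```

The middle case illustrates the non-linearity noted below: at fixed density under n_cr, raising the temperature by a couple of keV moves the discharge across the blue boundary into full absorption.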
Figure 1 shows that, at an average density below the critical one, additional plasma heating by 1.5 keV allows us to increase the plasma optical thickness so much that absorption becomes full. For this, we may use some other method of additional heating, for example neutral beam injection (NBI). As a result, the electron temperature can be high enough to satisfy the condition of full absorption (8). After that, the additional NBI heating can be turned off, but the plasma will remain in the area of full absorption. Of course, the considered process is highly non-linear, since the transition through the blue curve is accompanied by a transition from partial to full absorption of the waves. In the area below the blue curve, absorption depends linearly on density [19], while in the area above the blue curve, absorption is full and independent of density. Analysis of the experimental data using CPTM [17][18][19][20][21] shows that the absorbed power fraction η = P_ab/P_in in case (7), (9) increases linearly with plasma density. Here, P_ab is the absorbed power and P_in is the injected power. Thus, the complete refined empirical formula for the absorbed power fraction is (12). The empirical relation for the critical density is given by (13) [19]. Relation (13) links the boundary of the transition from full to partial absorption with the magnetic field. For T-10 with a field of 2.4 T, the critical density is about 2.8 × 10¹⁹ m⁻³. In [21], we studied the absorption of the X2 mode at the second harmonic in the TJ-II stellarator, where the critical density is 10¹⁹ m⁻³ for a field of 1 T. Note that relations (8) and (9) are valid for toroidally symmetric systems (tokamaks), but not always for stellarators with a different topology of (1/B)(dB/ds), where the vector s is parallel to the ray of the microwaves. COMPARISON OF EMPIRICAL AND CLASSICAL EXPRESSIONS FOR THE OPTICAL THICKNESS AT THE SECOND ECR HARMONIC (X2) The plasma optical thickness in practical units is given by Eq. (5), and the fraction of absorbed power η is described by the classical expression (3). Now we should compare formulas (3)-(5) with the empirical relations (12) and (13). First, we compare these expressions at low density. Assuming that the optical thickness is small, τ ≪ 1 (14), and using the expansion of the exponent in its small argument, we obtain for T-10 and TJ-II the expression (15). For convenience of comparison, we rewrite formula (12), taking into account (13), as (16). Expressions (15) and (16) differ both in the numerical factor, by 6.7, and in the parametric scalings. The classical formula (15) contains the electron temperature T_e, which is absent in relation (16). The large difference in the numerical factor means that the slope of the straight line η vs. density in (15) is much steeper than in (16). Physically, this means that the area with partial absorption in case (15), although it exists, is very small. Apparently, for this reason this effect was not considered in the published theoretical works. The dependence of η on the magnetic field in (15) is also distorted due to the presence of the increased numerical factor. If in (16) the critical density is determined by relation (13), then in (15) it has the form (17), which is much smaller. In T-10 shots with a given density, the electron temperature varies in a range spanning a factor of 1.7 [18]. In accordance with (15), the experimental points along the line should then be scattered over an area of such a width. However, the scatter of the points is much narrower (see Fig. 12 in [19]), which indicates that η is independent of the temperature in the experiment.
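Relations (12) and (13), as read from the statements above (linear growth of the absorbed fraction below the critical density, full absorption above it), amount to a piecewise-linear law with a field-dependent knee. The sketch below encodes that reading with the two critical densities quoted in the text; taking the low-density branch as exactly eta = n/n_cr is our assumption.

```python
def eta_empirical(n_1e19, n_cr_1e19):
    """Empirical absorbed fraction, relation (12) as read here: linear in
    density below n_cr, full absorption at and above n_cr."""
    return min(1.0, n_1e19 / n_cr_1e19)

for device, n_cr in (("T-10 (2.4 T)", 2.8), ("TJ-II (1 T)", 1.0)):
    values = ", ".join(f"n={n}e19 -> eta={eta_empirical(n, n_cr):.2f}"
                       for n in (0.5, 1.0, 2.0, 3.0))
    print(f"{device}: {values}")
```

Contrast this with the classical eta = 1 − exp(−τ): since the classical τ stays large except at very low density, the classical partial-absorption window is far narrower, which is exactly the discrepancy quantified in the next section.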
Now we compare the classical formula (3) for η with the empirical formula (12) in a wide density range. We use formula (15) as the argument of the exponent. Figure 2 shows the dependence of the heating efficiency on the plasma density for the T-10 tokamak with a magnetic field of 2.4 T. The lines with saturation correspond to the theoretical formulas (3)-(5) at temperatures of 1 and 2 keV. The dashed broken line corresponds to the empirical formula (12). Blue squares mark the values of the heating efficiency for T-10 shots with various densities and heating powers. We see that at the indicated magnetic field, the discrepancies between the classical formula (3) and the empirical formula (12) occur in the density range (18). For example, at a density for which the empirical formula gives only half of the power absorbed, the classical formula predicts that almost the whole ECRH power is absorbed. Figure 3 shows the heating efficiency versus density for the TJ-II stellarator with a field of 1 T, obtained with CPTM. In the simulations of the TJ-II shots, equivalent tokamak discharges were used. The equivalence conditions were defined in [21]. According to (13), the critical density at the indicated magnetic field is 10¹⁹ m⁻³, so the discrepancies between formulas (3), (4), and (12) occur in the range (19). Since operation is limited by the cutoff density at this magnetic field, all experimental points indeed lie in the area of partial absorption (19). OVERCOMING THE AREA WITH PARTIAL ABSORPTION It was shown in [17] that the existence of the area with partial absorption is associated with the effect of the "temperature threshold" (insufficient electron temperature). If this threshold is exceeded, then the absorption becomes full even at densities below the critical value (condition (7)). The condition for overcoming the threshold (8) requires an increase in the electron pressure. If the condition (8) is not satisfied with the available ECRH power, then there are two ways to achieve full absorption. The first way is to increase the input power P_in. Since the fraction of absorbed power remains unchanged at a constant plasma density, an increase in P_in will also lead to an increase in the absorbed power P_ab. The W7-X stellarator team followed this way, increasing the absorbed power up to 5 MW. At this power, the temperature threshold is overcome and full absorption is achieved. The second way is to use additional power of a different nature. Most frequently, additional heating by neutral beam injection (NBI) is used (the DIII-D [23] and ASDEX-Upgrade [24] tokamaks). The EAST tokamak uses additional heating by lower hybrid power injection [25]. In many cases, only a pulsed input of additional power with a duration of tens of milliseconds is used. After the transition to the full absorption regime (8), the existing ECRH power is then sufficient to maintain a steady state with a high electron temperature without returning to the regime (9) with partial absorption. To overcome the area of partial absorption, we can also increase the major radius of the device, leading to a corresponding increase in the optical thickness (see the factor R in (5)), or increase the size of the resonant region. This feature was realized in the W7-X stellarator, which has a major radius of 5.5 m.
The T-15MD tokamak has a major radius of about 1.5 m, and it will operate at magnetic fields of up to 2 T [26], i.e., with parameters comparable to T-10 and TJ-II; therefore, partial absorption is also possible there [19]. For the transition to the full absorption regime, it will be possible to use NBI or ion-cyclotron resonance heating [27,28]. It is also possible to organize a special scenario, in which the discharge starts at a density slightly higher than the critical one, with full absorption, and the density is then decreased while a small amount of heating remains. Then the condition (8) will be satisfied and the electron temperature will remain above the blue curve in Fig. 1. CONCLUSIONS An analysis of experiments in the T-10 tokamak and TJ-II stellarator has shown that at sufficiently low density, the ECRH power at the second harmonic of the extraordinary wave (X2 mode) is only partially absorbed. An empirical scaling is constructed for the fraction of the absorbed power as a function of the plasma density, magnetic field, and electron temperature. Its comparison with the classical formulas for the plasma optical thickness and for the fraction of the absorbed ECRH power has shown that a density range with partial absorption also exists in the classical description of the microwave absorption. Examples from the T-10 tokamak and the TJ-II stellarator have shown that this density range is an order of magnitude smaller than that determined by the empirical scaling. The value of the necessary increase in the electron temperature for the transition to the area of full absorption of the ECRH power at the X2 mode is estimated.

Fig. 2. (Color online) Efficiency of ECRH vs. density in the T-10 tokamak at a magnetic field of 2.4 T. Solid curves correspond to the theoretical formulas (4), (5) at temperatures of 1 and 2 keV. The dashed broken line corresponds to the empirical formula (12). The blue squares mark the values of the heating efficiency for shots with various densities and heating powers for temperatures of 1 and 2 keV [19].

Fig. 3. (Color online) Efficiency of ECRH vs. density in the TJ-II stellarator at a magnetic field of 1 T. The solid curve corresponds to the theoretical formulas (4), (5) at a temperature of 1 keV. The dashed broken line corresponds to the empirical formula (12). The blue squares mark the values of the heating efficiency for shots with various densities and heating powers [21]. All experimental points lie in the area of partial absorption.

FUNDING This work was partly supported by the Russian Science Foundation (project no. 23-72-00042).
4,152.4
2023-08-01T00:00:00.000
[ "Physics" ]
Proximity effect of pair correlation in the inner crust of neutron stars We study the proximity effect of pair correlation in the inner crust of neutron stars by means of the Skyrme-Hartree-Fock-Bogoliubov theory formulated in coordinate space. We describe a system composed of a nuclear cluster immersed in neutron superfluid, which is confined in a spherical box. Using a density-dependent effective pairing interaction which reproduces both the pair gap of neutron matter obtained in ab initio calculations and that of finite nuclei, we analyze how the pair condensate in the neutron superfluid is affected by the presence of the nuclear cluster. It is found that the proximity effect is characterized by the coherence length of the neutron superfluid measured from the edge position of the nuclear cluster. The calculation predicts that the proximity effect has a strong density dependence. In the middle layers of the inner crust with baryon density 5 × 10⁻⁴ fm⁻³ < ρ_b < 2 × 10⁻² fm⁻³, the proximity effect is well limited to the vicinity of the nuclear cluster, i.e. to a sufficiently smaller area than the Wigner-Seitz cell. On the contrary, the proximity effect is predicted to extend to the whole volume of the Wigner-Seitz cell in shallow layers of the inner crust with ρ_b < 2 × 10⁻⁴ fm⁻³, and in deep layers with ρ_b > 5 × 10⁻² fm⁻³. Introduction The inner crust of neutron stars is an exotic inhomogeneous matter consisting of a lattice of neutron-rich nuclear clusters immersed in neutron superfluid [1]. One of the central issues of the physics of the inner crust is the interplay between the superfluidity and the inhomogeneity, which influences various properties of the inner crust such as the specific heat, the thermal conductivity, and the pinning and unpinning of vortices. These are essential factors for understanding astrophysical issues such as the cooling and the glitch phenomenon of neutron stars. Microscopic many-body approaches to these phenomena have been pursued in the framework of the Hartree-Fock-Bogoliubov (HFB) theory, which has the capability to describe microscopically an inhomogeneous pair-correlated system. It has been argued, for instance, that the presence of the nuclear cluster modifies the quasiparticle excitation spectrum and the average pair gap, leading to a sizable difference in the specific heat of the inner crust from that of the uniform neutron superfluid [2][3][4][5][6]. The HFB framework has also been used to evaluate the pinning energy of superfluid vortices [7][8][9]. Recent interest also concerns a dynamical aspect of these issues, i.e. the interaction between the vibrational motion of the nuclear cluster and the phonon excitation (the Anderson-Bogoliubov collective mode) of the neutron superfluid. This is one of the key ingredients which influence the thermal conductivity of the inner crust in magnetars [10][11][12][13][14]. In an attempt to analyze this dynamical coupling from a microscopic viewpoint, we have investigated the collective excitation of the inner crust matter by means of the quasiparticle random phase approximation based on the HFB theory [15,16]. We found that the dynamical coupling between the collective motions of the nuclear cluster and of the neutron superfluid is weak. In the present study, we intend to reveal the interplay between the nuclear cluster and the neutron superfluid from a different viewpoint, namely the proximity effect of the pairing correlation [17,18].
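Since the abstract characterizes the range of the proximity effect by the coherence length of the neutron superfluid measured from the cluster edge, a rough estimator is useful. The sketch below uses the standard Pippard relation ξ = ħv_F/(πΔ) with free-Fermi-gas kinematics; the density and gap values fed in are illustrative assumptions, not results of this paper.

```python
import numpy as np

HBARC = 197.327  # hbar*c [MeV fm]
M_N = 939.565    # neutron mass [MeV/c^2]

def coherence_length_fm(rho_n_fm3, gap_MeV):
    """Pippard coherence length xi = hbar*v_F / (pi*Delta) for uniform neutron
    matter of density rho_n, using free-Fermi-gas, nonrelativistic kinematics."""
    kF = (3.0 * np.pi**2 * rho_n_fm3) ** (1.0 / 3.0)  # Fermi momentum [fm^-1]
    vF_over_c = HBARC * kF / M_N                      # Fermi velocity over c
    return HBARC * vF_over_c / (np.pi * gap_MeV)

for rho, gap in ((1e-3, 0.5), (1e-2, 1.0)):
    print(f"rho_n = {rho:.0e} fm^-3, Delta = {gap} MeV -> xi ~ "
          f"{coherence_length_fm(rho, gap):.1f} fm")
```

Values of order 10 fm, small against a mid-crust Wigner-Seitz cell, already hint at why the proximity effect can stay confined near the cluster at intermediate densities, while a collapsing gap in the deep layers, or a very dilute superfluid in the shallow ones, can stretch ξ toward the cell size.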
The proximity effect is a general phenomenon which emerges around the border region of a superconducting/superfluid matter in contact with normal matter (or matter with different pairing properties). The pairing correlations in both media are mutually affected in the border region, since the Cooper pairs penetrate the border. The proximity effect in the inner crust matter has been discussed in a few preceding works [2,3,27], but only in a qualitative manner. In the present study, we aim at characterizing the proximity effect quantitatively in order to reveal basic features of the pair correlation arising from the inhomogeneous structure of the inner crust matter. As the theoretical framework for this study, we adopt the HFB theory using the Skyrme functional, with the implementation of a few new features. One of the key elements in the HFB approach is the effective pairing interaction, or the effective pairing functional, which generates the pair correlation in the system under study; a density-dependent contact force, called the density-dependent delta interaction (DDDI), is often adopted. Note, however, that the inner crust matter consists of the neutron superfluid, whose density varies over a wide range from zero to the nuclear saturation density, and the nuclear clusters, which resemble isolated neutron-rich nuclei. In order to take this feature into account, we prepare a new parameter set of the DDDI, required to reproduce the pairing gap of neutron superfluid obtained in ab initio calculations [28,29] as well as the experimental pairing gap in finite nuclei. Secondly, we quantify the range of the proximity effect by identifying the distance up to which the presence of the nuclear cluster influences the pairing property of the neutron superfluid. Using this measure, we discuss in detail the dependence of the proximity effect on the density of the neutron superfluid, and clarify how large the proximity effect is in different layers of the inner crust. In Section 2, we explain the adopted Skyrme-HFB model and the new parameter set of the DDDI. In the present HFB calculation, all the nucleons are described as quasiparticles confined in a spherical box. If we adopt a box size equal to the Wigner-Seitz radius of the lattice cell, this is the same as the Wigner-Seitz approximation often adopted in preceding works. However, the box truncation causes the so-called finite-size effect, which makes it difficult to analyze the proximity effect. In Section 3, we examine the finite-size effect and propose a different setting of the analysis, using a large box truncation in place of the Wigner-Seitz approximation. Section 4 is devoted to a systematic analysis of the proximity effect. In subsection 4.1 we describe our scheme of analysis, which quantifies the range of the proximity effect, and justify the scheme with a systematic variation of the density of the neutron superfluid immersing the nuclear cluster. In subsection 4.2, we apply the same analysis to various layers of a realistic configuration of the inner crust of neutron stars. Section 5 is devoted to the conclusions. Skyrme-Hartree-Fock-Bogoliubov method in a spherical box We adopt the Skyrme-Hartree-Fock-Bogoliubov method to describe the inner crust matter. Since the method is an extension of the one used in Refs. [15,16], we describe it briefly, with emphasis on the new aspects introduced in the present study. We solve the HFB equation in a spherical box using the radial coordinate space and the partial wave expansion.
Zero temperature is assumed and spherical symmetry of the solutions is imposed. Electrons are neglected. The radial HFB equation for given angular quantum numbers lj reads

$$\begin{pmatrix} h_{qlj}(r)-\lambda_q & \Delta_q(r) \\ \Delta_q(r) & -\left(h_{qlj}(r)-\lambda_q\right) \end{pmatrix}\begin{pmatrix} \varphi^{qlj}_1(r) \\ \varphi^{qlj}_2(r) \end{pmatrix} = E \begin{pmatrix} \varphi^{qlj}_1(r) \\ \varphi^{qlj}_2(r) \end{pmatrix}, \qquad (1)$$

where $(\varphi^{qlj}_1, \varphi^{qlj}_2)$ is the quasiparticle wave function and the index q denotes neutron or proton. We discretize the radial coordinate with an interval h = 0.2 fm as $r_i = (i-\tfrac{1}{2})h = h/2, 3h/2, \cdots$ (i = 1, ..., N) up to the edge $r = R_{\rm box}$ of the box, and use the nine-point formula to represent the derivatives in the Hartree-Fock Hamiltonian $h_{qlj}(r)$. We impose the Dirichlet-Neumann boundary condition [30], with which even-parity wave functions vanish at the edge of the box and the first derivatives of odd-parity wave functions vanish at the same position. Equation (1) is then represented as a matrix eigenvalue problem (2), in which the wave function at the discretized coordinates, $\Phi^{qlj}=\left(\varphi^{qlj}_1(r_1),\ldots,\varphi^{qlj}_1(r_N),\varphi^{qlj}_2(r_1),\ldots,\varphi^{qlj}_2(r_N)\right)^T$, is a 2N-dimensional vector. We use the routine DSYEVX in the LAPACK package to solve the eigenvalue problem for the symmetric matrix. If we treat the lattice configuration of the nuclear clusters by means of the Wigner-Seitz approximation, the box radius $R_{\rm box}$ is chosen to be the size of the Wigner-Seitz cell. We shall also choose larger boxes, $R_{\rm box}$ = 100 fm or 200 fm, as explained below. All quasiparticle states up to a maximal quasiparticle energy $E_{\rm max}$ = 60 MeV are included to calculate the number density, the pair density and all the quantities needed for the selfconsistent potentials. We also put a cut-off $l_{\rm max}$ on the angular momenta of the partial waves, such that $l_{\rm max} > \sqrt{E_{\rm max}/(\hbar^2/2m)}\, R_{\rm box}$: for example, $l_{\rm max}$ = 200 for $R_{\rm box}$ = 100 fm, and $l_{\rm max}$ = 400 for $R_{\rm box}$ = 200 fm. We use the parameter set SLy4 [19] for the selfconsistent Hartree-Fock potential in $h_q(r)$. We adopt the density-dependent delta interaction, as described below, to derive the pair potential $\Delta_q(r)$. We vary the neutron Fermi energy $\lambda_n$ to control the neutron density, and we determine the proton Fermi energy $\lambda_p$ to fix the proton number Z of the nuclear cluster. The other details are the same as in the previous studies [15,16]. Density-dependent pairing interaction As the pairing interaction, we use a density-dependent delta interaction (DDDI), given for neutrons as

$$v_{{\rm pair},n}(\vec r_1,\vec r_2) = V_n[\rho_n(\vec r),\rho_p(\vec r)]\,\frac{1-P_\sigma}{2}\,\delta(\vec r_1-\vec r_2). \qquad (3)$$

Here $V_n[\rho_n(\vec r),\rho_p(\vec r)]$ is the density-dependent interaction strength, and $(1-P_\sigma)/2$ is the projection operator onto the spin-singlet channel. The pair potential is then $\Delta_n(\vec r) = \tfrac{1}{2} V_n[\rho_n(\vec r),\rho_p(\vec r)]\,\tilde\rho_n(\vec r)$.

Table 1. DDDI parameters adopted in the present study. For the definition, see Eq. (5) and the text. The parameters are appropriate for the cut-off energy $e_{\rm cut}$ = 60 MeV.

We consider the following three models for the interaction strength $V_n[\rho_n(\vec r),\rho_p(\vec r)]$. The first one, which we introduced in Refs. [20,21], is given as

$$V_n[\rho_n] = V_0\left[1-\eta\left(\frac{\rho_n(\vec r)}{\rho_0}\right)^{\alpha}\right] \qquad (4)$$

with $\rho_0 = 0.08$ fm$^{-3}$. Here the overall constant $V_0 = -458.4$ MeV fm$^3$ is determined to reproduce the $^1S_0$ scattering length a = -18.5 fm in free space (i.e. at zero density) under the single-particle cut-off energy $e_{\rm cut}$ = 60 MeV. The dependence of the interaction strength $V_n[\rho_n]$ on the neutron density $\rho_n$ is determined so that it reproduces the neutron pairing gap in pure neutron matter obtained in the BCS approximation using a bare nuclear force [20,21]. We denote the parameterization, Eq. (4), as "DDDI-b" since it refers to the BCS gap with the bare nuclear force. (It is the same as the parametrization DDDI-G3RS in Ref. [20].)
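A minimal sketch of evaluating a DDDI-b-style strength, Eq. (4), as a function of density follows; the shape parameters eta and alpha below are placeholder values for illustration, not the parameters of Table 1.

```python
import numpy as np

# Sketch: DDDI-b-style density-dependent pairing strength, Eq. (4).
# eta and alpha are placeholder values, not the Table 1 parameters.
V0 = -458.4      # MeV fm^3, fixed by the 1S0 scattering length (e_cut = 60 MeV)
rho0 = 0.08      # fm^-3, reference density
eta, alpha = 0.7, 0.6   # assumed shape parameters of the density dependence

def V_n(rho_n):
    """Pairing strength in MeV fm^3; the attraction weakens as rho_n -> rho0."""
    return V0 * (1.0 - eta * (rho_n / rho0) ** alpha)

for rho in (1e-4, 1e-3, 1e-2, 0.08):
    print(f"rho_n = {rho:.0e} fm^-3 -> V_n = {V_n(rho):8.1f} MeV fm^3")
```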
In the present study we introduce a more realistic modeling of the neutron pairing appropriate to the inner crust matter. Here we consider parametrizations of the DDDI that provide a realistic pairing gap both in neutron matter and in finite nuclei. Concerning neutron matter, it is known that the pairing gap is affected by medium effects beyond the BCS approximation, and many theoretical studies attempting to evaluate the medium effects predict a significant reduction from the BCS gap, while the predicted values spread over a wide range [22][23][24][25]. Nevertheless, the pairing gap in the low-density limit is believed to be described reliably by a perturbative approach to the screening effect, discussed first by Gor'kov and Melik-Barkhudarov (GMB) [26], and the pairing gap $\Delta_{\rm GMB}$ in the GMB framework gives a reduction by a factor of $(4e)^{1/3} \approx 2.2$ from the BCS pairing gap [23,31,32]. Recently, numerical ab initio calculations based on Monte Carlo methods have been performed for pure neutron matter in the low-density region $\rho_n \approx 10^{-5}-10^{-2}$ fm$^{-3}$, and the predicted pairing gaps are reduced from the BCS gap by a factor of 1.5-2 [24,28,29,33]. We refer to these studies in requiring a new parametrization of the DDDI. It is also known that the pairing gap in finite nuclei cannot be described well by the BCS approximation applied to the bare nuclear force, and there is no ab initio evaluation of the gap in finite nuclei. Instead, we refer to experimental information on the pairing gap in finite nuclei. In order to satisfy these conditions we introduce the following extended form of the density-dependent interaction strength:

$$V_n[\rho_n,\rho_p] = V_{\rm GMB}[\rho_n] - V_0\left[\eta_2\left(\frac{\rho_n}{\rho_0}\right)^{\alpha_2} + \eta_1\left(\frac{\rho_p}{\rho_0}\right)^{1/3}\right]. \qquad (5)$$

The first term is introduced to describe the GMB gap appropriate to the low-density limit of pure neutron matter. As discussed in Appendix A, the strength $V_{\rm GMB}$ of the contact force which reproduces the GMB pairing gap $\Delta_{\rm GMB}$ depends on the neutron Fermi momentum $k_{F,n}$, or the density $\rho_n$, of neutron matter. The dependence is expressed as a term linear in $k_{F,n}$, or $\rho_n^{1/3}$, if it is expanded in powers of $k_F$; the coefficient of this term is fixed by requiring that the GMB pairing gap be reproduced by the DDDI in the low-density limit $\rho_n \to 0$, $k_{F,n}\to 0$. The second and third terms are introduced to represent the pairing gap of neutron matter at finite density and that in finite nuclei. In particular, the second term, together with the first term, is relevant to the pairing gap in neutron matter, and we assume that the second term has a power $\alpha_2 = 2/3$, i.e. it is $\propto \rho_n^{2/3} \propto k_{F,n}^2$, the second power of the neutron Fermi momentum $k_{F,n}$. We then require that the coefficient $\eta_2$ of this term be consistent with the ab initio pairing gap of neutron matter obtained for $10^{-5}$ fm$^{-3} \lesssim \rho_n \lesssim 10^{-2}$ fm$^{-3}$ in the quantum Monte Carlo calculation by Gezerlis and Carlson [28] and the determinantal lattice Monte Carlo calculation by Abe and Seki [29]. Note, however, that this requirement alone does not fix the coefficient $\eta_2$ uniquely, since these ab initio calculations differ slightly from each other and there are no ab initio results for densities above $10^{-2}$ fm$^{-3}$. The third term, dependent on the proton density, represents a part of the medium effects associated with systems with a proton fraction. For simplicity, we assume that it is proportional to the proton Fermi momentum $k_{F,p}$, or the proton density $\rho_p^{1/3}$.¹ We use both the coefficient $\eta_1$ of this term and the uncertainty in $\eta_2$ to describe the pairing gap in finite nuclei.
In practice, we require that the average neutron pairing gap $\bar\Delta_{n,uv} = \int \Delta_n(\vec r)\rho(\vec r)\,d\vec r \big/ \int \rho(\vec r)\,d\vec r$ in $^{120}$Sn obtained from our HFB model reproduces the experimental neutron gap $\Delta_{n,\rm exp} \approx 1.3$ MeV, extracted from the 3-point odd-even mass difference [34].

¹ A perturbative estimate of the medium effect in symmetric matter gives an attractive induced interaction proportional to $N_{0,p} \propto k_{F,p}$ [31].

In the present study we prepare two different parameter sets to represent the remaining uncertainty of the neutron pair gap. In one case (called "DDDI-a1" below), we choose $\eta_2 = 0.06$ and $\eta_1 = 0$, so that the neutron pairing gap in $^{120}$Sn is reproduced without $\eta_1$. In this case, the pairing gap of neutron matter is close to that of Abe and Seki [29], and the neutron matter pairing gap at moderately low density is rather large, $\Delta \sim 1-2$ MeV, as shown in Fig. 1. It is remarked that the medium effect associated with the nuclear cluster or finite nuclei is effectively included in $\eta_2$. In the other parameter set ("DDDI-a2"), we consider the case where the neutron matter pairing gap at moderately low density is relatively small; we determine $\eta_2 = 0.255$ so as to make the neutron matter pairing gap vanish at $\rho_n = \rho_0$, as the BCS gap does. The parameter $\eta_1 = -0.195$ is then determined to reproduce the neutron gap in $^{120}$Sn. (Note that the neutron matter pairing gap approximately reproduces the result of Gezerlis and Carlson [28], as shown in Fig. 3.) The parameter sets of the three DDDI models are summarized in Table 1. The DDDI-a2 reproduces approximately the result of Gezerlis and Carlson [28] for the density range $\rho_n = 10^{-5}-10^{-2}$ fm$^{-3}$. The gap of the DDDI-a2 at moderate density is small, $\Delta < 1.3$ MeV, and vanishes at $\rho_n \sim 0.08$ fm$^{-3}$ ($k_F \sim 1.4$ fm$^{-1}$), corresponding to neutrons in saturated nuclear matter. The neutron gap of the DDDI-a1 is very close to that of the DDDI-a2 up to $\rho_n \lesssim 10^{-3}$ fm$^{-3}$, but deviates from it above $\rho_n \gtrsim 10^{-3}$ fm$^{-3}$; there it is rather close to the gap of Abe and Seki [29], and the neutron matter pairing gap at moderately low density is rather large, $\Delta \sim 1-2$ MeV. The parameter set DDDI-b gives a larger pairing gap at low and moderately low densities than DDDI-a1 and DDDI-a2, while at densities around saturation the gap becomes small and almost vanishes.² We consider that DDDI-a1 and DDDI-a2 are more realistic than DDDI-b, while the difference between DDDI-a1 and DDDI-a2 represents the uncertainty in modeling the realistic pairing correlation. We also use the model DDDI-b, since it simulates the BCS gap, which is a robust baseline common to all models of the realistic bare nuclear force [22]. Figure 2 shows the coherence length $\xi$ of superfluid uniform neutron matter, calculated as described in Appendix B. The coherence length $\xi$ depends strongly on the neutron density: it is as short as $\xi \lesssim 10$ fm at $\rho_n = 10^{-3}-2\times 10^{-2}$ fm$^{-3}$.

Figure caption: The neutron pair gap in Sn isotopes obtained with the Skyrme-Hartree-Fock-Bogoliubov method using the three DDDI pairing interaction models. Solid, dashed and dot-dashed curves correspond to DDDI-b, DDDI-a1 and DDDI-a2, respectively. The Skyrme parameter set SLy4 is adopted. The open circle is the experimental neutron pair gap derived using the odd-even mass difference [34] and AME2016 [39]. See text for details.

The coherence length grows gradually as the neutron density decreases below $10^{-3}$ fm$^{-3}$, and
it also grows rather sharply as $\rho_n$ increases above $\sim 3\times 10^{-2}$ fm$^{-3}$. The minimum value of the coherence length is $\xi \sim 3.6$ fm for DDDI-b at the neutron density corresponding to $\lambda_n \approx 5$ MeV, $\xi \sim 4.6$ fm for DDDI-a1 at $\lambda_n \approx 6$ MeV, and $\xi \sim 6.1$ fm for DDDI-a2 at $\lambda_n \approx 5$ MeV. The dotted curve in Fig. 2 shows the average inter-neutron distance $d = \rho_n^{-1/3}$. It is noted that the coherence length $\xi$ is shorter than the average inter-neutron distance around these densities for DDDI-b, DDDI-a1 and DDDI-a2. A coherence length shorter than d implies that the pair correlation at these densities is in the domain of strong-coupling pairing, characterized as the BCS-BEC crossover phenomenon [23]. Finite-size effect and large-box configuration Since the present HFB calculation is performed in the radial coordinate space truncated with a finite box radius $R_{\rm box}$, the obtained results depend on $R_{\rm box}$, especially when $R_{\rm box}$ is not large. This kind of dependence is often called the finite-size effect. If we adopt the Wigner-Seitz approximation, where the box size is chosen equal to the Wigner-Seitz radius $R_{\rm cell}$ of the lattice cell of the inner crust, the results also include the finite-size effect. We shall examine how the Wigner-Seitz approximation is affected by the finite-size effect. For this purpose, we here describe pure neutron matter using the same HFB code. For pure neutron matter, we can obtain an accurate numerical result by means of the uniform-BCS calculation, which corresponds to the limit of infinite box size $R_{\rm box} \to \infty$. Comparison with the uniform-BCS result makes it possible to evaluate the finite-size effect. We have applied the present HFB model to pure neutron systems by simply neglecting the proton contributions. Figure 4 shows a few examples of the results, in which the neutron Fermi energy is chosen as $\lambda_n$ = 7.2, 2.9 and 0.2 MeV, corresponding to cells 2, 5 and 10 in Table 2 and to neutron densities $1.8\times 10^{-2}$, $3.0\times 10^{-3}$ and $3.0\times 10^{-4}$ fm$^{-3}$, respectively. The pairing interaction DDDI-a1 is adopted. Dashed curves are the results of the calculation in which the box radius $R_{\rm box}$ is set to the radius $R_{\rm cell}$ = 28, 39 and 54 fm of the corresponding Wigner-Seitz cells. It is seen that both the number density and the pair density of neutrons deviate from the uniform-BCS results; the finite-size effect in the pair density is not negligible and is much larger than that in the number density. The deviation from the uniform-BCS result (horizontal lines) is more than 20% in cell 10, although it is less than about 5% in cells 2 and 5. The boundary condition with the finite box causes a discretization of the energy spectrum of the quasiparticle states, and the pairing property is influenced by this discretization if the pair gap is not sufficiently large compared with the energy spacing. It is also seen that the deviation from the uniform-BCS result is worse at positions close to the origin than at far positions. A possible explanation is that the influence of the discretization of the quasiparticle energy spectrum is stronger at small r: the number of contributing quasiparticle states is effectively small there, since the wave functions of high partial waves are suppressed at small r. The above results indicate that the Wigner-Seitz approximation to the inner crust matter may not be accurate enough to discuss the proximity effect. One needs to control the finite-size effect in a better way.
A desirable approach would be to take into account the lattice structure of the inner crust matter using the band-theory method and Bloch waves, in which the continuity of the neutron quasiparticle spectrum is kept. However, the band theory applied to the HFB calculation is presently quite limited [2], and a calculation with a large quasiparticle space is too demanding and difficult to perform. Instead, we adopt a simpler approach in which a nuclear cluster is placed in a neutron superfluid confined in a large box, with the box size chosen sufficiently large to reduce the finite-size effect as much as possible. We find that $R_{\rm box} \gtrsim 100$ fm gives a pair density convergent to the uniform-BCS result with an accuracy of around 1% for densities $\rho_n \gtrsim 1\times 10^{-4}$ fm$^{-3}$, as shown in Figure 2(a)(b), where we plot the results obtained with $R_{\rm box}$ = 100 fm. In very-low-density cases, $\rho_n \lesssim 1\times 10^{-5}$ fm$^{-3}$, the pairing gap becomes very small, $\Delta \lesssim 0.01$ MeV. In this case, the influence of the discretization of the quasiparticle levels is no longer negligible, and hence a larger box is required. For cell 10 (Fig. 2(c)), we obtained agreement to the required accuracy with $R_{\rm box}$ = 200 fm. In the following we adopt this large-box configuration to discuss the proximity effect associated with the presence of the nuclear cluster. Proximity effect We shall now discuss the pair correlation in the inner crust matter. As discussed above, we consider the system confined in a large box, at the center of which a nuclear cluster is placed. Using this setup, we shall investigate how the presence of the nuclear cluster influences the pair correlation of the neutron superfluid in the neighbourhood of the cluster. Length of the proximity effect In order to investigate general features of the proximity effect, we first examine cases where the density of the surrounding neutron superfluid is systematically varied while the proton number is fixed. In the next subsection we discuss realistic configurations of the inner crust matter, for which the proton number and the density of the neutron superfluid are chosen to represent various layers of the inner crust. The proton number is Z = 28 in all the examples of this subsection, and we vary the neutron Fermi energy $\lambda_n$ systematically from 0.2 MeV to 6 MeV, which corresponds to densities of the uniform neutron superfluid from $\rho_n = 4\times 10^{-5}$ fm$^{-3}$ to $1\times 10^{-2}$ fm$^{-3}$. A typical result, obtained for $\lambda_n$ = 4 MeV ($\rho_n = 6.1\times 10^{-3}$ fm$^{-3}$) with DDDI-b, is shown in Fig. 5, where plotted are the number densities of neutrons and protons, $\rho_n(r)$ and $\rho_p(r)$, and the neutron pair density $\tilde\rho_n(r)$ as functions of the radial coordinate r. It is seen that the nuclear cluster is well localized in the central region: the neutron density $\rho_n(r)$ converges rather quickly to a constant value at around $r \approx 8$ fm (the proton density $\rho_p(r)$ converges to zero around $r \approx 6$ fm). The surface of the nuclear cluster may be quantified by fitting the neutron density with a function of the Woods-Saxon type,

$$\rho_{\rm fit}(r) = \frac{f_0}{1+\exp\left((r-R_s)/a\right)} + \rho_{n,M},$$

where $R_s$ defines the half-density surface and a represents the diffuseness of the surface. The constant $\rho_{n,M}$ is the neutron density obtained from the uniform-BCS calculation performed for the same value of $\lambda_n$. The values of $f_0$, $R_s$ and a are extracted from the fit. In addition, we find it useful to consider "the edge" of the nuclear cluster to evaluate the area where the cluster exists. We define the nuclear edge by $R_{\rm edge} = R_s + 4a$.
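A minimal sketch of the surface extraction just described follows: fit a Woods-Saxon profile to a tabulated density and define the edge as R_edge = R_s + 4a. The synthetic density profile and the starting values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: extract R_s and a from a density profile by a Woods-Saxon fit,
# then define the cluster edge as R_edge = R_s + 4a. The "data" below are
# a synthetic profile, assumed for illustration only.
rho_nM = 6.1e-3   # external neutron superfluid density, fm^-3

def woods_saxon(r, f0, Rs, a):
    """Woods-Saxon profile on top of the asymptotic superfluid density."""
    return f0 / (1.0 + np.exp((r - Rs) / a)) + rho_nM

r = np.linspace(0.1, 30.0, 150)
rho_data = woods_saxon(r, 0.082, 6.4, 0.55)       # synthetic "HFB" density
rho_data *= 1.0 + 0.01 * np.sin(3.0 * r)          # mimic numerical wiggles

popt, _ = curve_fit(woods_saxon, r, rho_data, p0=(0.08, 6.0, 0.6))
f0, Rs, a = popt
R_edge = Rs + 4.0 * a
print(f"R_s = {Rs:.2f} fm, a = {a:.2f} fm, R_edge = {R_edge:.2f} fm")
```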
The edge position $r = R_{\rm edge}$ is indicated by the black circle in Fig. 5, and it is seen that $R_{\rm edge}$ represents well the position where the neutron density $\rho_n(r)$ converges to $\rho_{n,M}$. The most noticeable feature in Fig. 5 is that the neutron pair density $\tilde\rho_n(r)$ exhibits behaviours different from those of the neutron number density $\rho_n(r)$. It is seen that the neutron pair density $\tilde\rho_n(r)$ converges slowly, reaching the uniform-BCS value only at around $r \approx 12$ fm, which deviates from $R_{\rm edge}$ by about 4 fm. In other words, the influence of the nuclear cluster extends to the neighbouring region beyond $R_{\rm edge}$. This slow convergence is nothing but the proximity effect. In this example, the neutron pair density inside the cluster is significantly smaller than that outside the cluster. This reflects the characteristic density dependence of the neutron pair gap of the DDDI-b model: the gap for the density inside the cluster ($\rho_n \sim \rho_0$) is very small, $\Delta \lesssim 0.1$ MeV, whereas that for the density of the neutron superfluid ($\rho_{n,M} \sim 6.1\times 10^{-3}$ fm$^{-3}$) is relatively large, $\Delta \sim 2.1$ MeV. It has been argued that the proximity effect emerges in a region adjacent to the border, with its length scale characterized by the coherence length $\xi$ of the superfluid/superconducting matter [17]. We here assume that the border between the neutron superfluid and the nuclear cluster is approximated by the edge radius $R_{\rm edge}$, rather than by the half-density surface $R_s$. If these considerations are reasonable, it is expected that the proximity effect is seen up to $r \approx R_{\rm edge} + \xi$. In the case shown in Fig. 5, the position where the neutron pair density converges to the uniform-BCS value corresponds well to $r = R_{\rm edge} + \xi = 8.27\ {\rm fm} + 3.63\ {\rm fm} = 11.9$ fm, and the above argument appears to hold. The proximity effect is clearly visible in all the cases: the pair density converges to that of the uniform neutron superfluid at a position deviating significantly from the edge position $r = R_{\rm edge}$ of the nuclear cluster. It is also seen that the range of the proximity effect depends rather strongly on the neutron Fermi energy, or the density of the neutron superfluid, especially at low neutron density $\rho_{n,M} \lesssim 5\times 10^{-4}$ fm$^{-3}$ and $\lambda_n \lesssim 1.0$ MeV. It also depends on the three DDDI models. Despite the differences in the pairing properties, we confirm here that the range which the proximity effect reaches is described well by the position $r = R_{\rm edge} + \xi$ (marked with the square symbol), characterized by the coherence length $\xi$ measured from the edge $R_{\rm edge}$ of the nuclear cluster. (Note that the edge position $R_{\rm edge}$ of the nuclear cluster depends only weakly on the neutron Fermi energy, and there is essentially no dependence on the three choices of the pairing interaction.) We recall here Fig. 2, where the coherence length is shown to become as small as $\lesssim 10$ fm at moderately low density $\rho_n = 7\times 10^{-4}-2\times 10^{-2}$ fm$^{-3}$ for the three DDDI's. This brings about the short range of the proximity effect seen for $\lambda_n = 2-6$ MeV, which is related to the specific feature of dilute neutron superfluid that the BCS-BEC crossover is about to occur at these densities. The long range of the proximity effect seen for $\lambda_n = 0.2-1.0$ MeV can be related to the monotonic and considerable increase of the coherence length $\xi$ with decreasing neutron density at very low density, $\rho_n \lesssim 10^{-3}$ fm$^{-3}$. Note that for $\rho_n \sim 10^{-5}-10^{-4}$ fm$^{-3}$, the coherence length is $\xi$ = 5-20 fm in the case of DDDI-b, $\xi$ = 10-36 fm for DDDI-a1 and $\xi$ = 11-37 fm for DDDI-a2.
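As a rough cross-check of this density dependence, one can estimate the coherence length from the textbook Pippard-type relation xi ~ hbar*v_F/(pi*Delta). Note this is a standard estimate, not the <r^2>-based definition of Appendix B used in the paper, and the (density, gap) pairs below are illustrative assumptions.

```python
import numpy as np

# Rough Pippard-type estimate of the coherence length, xi ~ hbar*v_F/(pi*Delta).
# This is a textbook estimate, not the <r^2>-based definition of Appendix B;
# the (density, gap) pairs below are illustrative assumptions.
hbarc = 197.327          # MeV fm
mn = 939.565             # neutron mass, MeV/c^2

def xi_pippard(rho_n, delta):
    """Coherence length (fm) for neutron density rho_n (fm^-3) and gap (MeV)."""
    kF = (3.0 * np.pi**2 * rho_n) ** (1.0 / 3.0)   # Fermi momentum, fm^-1
    vF = hbarc * kF / mn                            # Fermi velocity in units of c
    return hbarc * vF / (np.pi * delta)

for rho, gap in [(3e-4, 0.3), (3e-3, 1.2), (1.8e-2, 2.0)]:
    print(f"rho_n = {rho:.0e} fm^-3, Delta = {gap} MeV -> xi ~ "
          f"{xi_pippard(rho, gap):.1f} fm")
# xi is a few fm at moderate densities and grows as the gap collapses at very
# low density, in line with the trend discussed above.
```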
If the density of the external neutron superfluid decreases further, the range of the proximity effect is expected to extend far beyond 50 fm. Realistic inner crust configurations Finally, we discuss the proximity effect for realistic situations of the inner crust of neutron stars. Here we refer to the Wigner-Seitz cells obtained by Negele and Vautherin [30] for various layers of the inner crust. We perform the HFB calculation for the cells listed in Table 2 using the large-box configuration. The proton number Z and the Wigner-Seitz radius $R_{\rm cell}$ of each cell are taken from Ref. [30]. The neutron Fermi energy $\lambda_n$, the control parameter of the neutron density, is chosen so that the obtained density of the external neutron superfluid reproduces approximately the density of the neutron gas in Ref. [30]. For simplicity we use a common value of $\lambda_n$ for the three DDDI models. The box size is $R_{\rm box}$ = 100 fm for most cells, and 200 fm only for cell 1 with DDDI-b, cell 10 with DDDI-a1 and cell 10 with DDDI-a2.

Table 2. The proton number Z and the neutron Fermi energy $\lambda_n$ employed in the present calculation to represent realistic configurations of the inner crust cells [30]. The next columns are the density $\rho_{n,M}$, the pairing gap $\Delta$ and the coherence length $\xi$, obtained from the uniform-BCS calculation for the corresponding neutron matter. Results of the three DDDI models, DDDI-b, DDDI-a1, and DDDI-a2, are listed for $\Delta$ and $\xi$, while $\rho_{n,M}$ is shown only for DDDI-a1. The third-to-last column is the edge radius $R_{\rm edge}$ of the nuclear cluster extracted from the Hartree-Fock-Bogoliubov calculation with DDDI-a1. The second-to-last is the Wigner-Seitz radius $R_{\rm cell}$ of the cells [30], while the last is the average baryon density $\rho_b = \int_0^{R_{\rm cell}}\left(\rho_n(r)+\rho_p(r)\right)r^2\,dr \big/ (R_{\rm cell}^3/3)$ (with DDDI-a1), evaluated using the same Wigner-Seitz radius. See text for details.

The calculated neutron pair density is shown in Fig. 8. The maximum of the plotted radial coordinate is the Wigner-Seitz radius $R_{\rm cell}$ for each cell. A noticeable feature is that in cells 3 to 8 the pair density converges to that of the uniform-BCS calculation at a distance shorter than half the Wigner-Seitz radius. In other words, the proximity effect is restricted to a small area near the nuclear cluster. The area of uniform neutron superfluid and that of the nuclear cluster are well separated in these middle layers of the inner crust. This feature is common to the three DDDI pairing models. It is noted that the coherence length of the external neutron superfluid is smallest, $\xi \approx 4-6$ fm, at cells 2-6 (for DDDI-b), cells 2-5 (for DDDI-a1), and cell 3 (for DDDI-a2), which is significantly smaller than the Wigner-Seitz radii of these cells in the middle layers. In cells 9 and 10, where the external neutron superfluid is dilute ($\rho_{n,M} \lesssim 1\times 10^{-4}$ fm$^{-3}$), the proximity effect extends to a major part of the Wigner-Seitz cell, beyond half the Wigner-Seitz radius, especially for DDDI-a1 and DDDI-a2. This reflects the long coherence length at such very low densities: $\xi \gtrsim 20-40$ fm for DDDI-a1 and DDDI-a2, and $\xi \gtrsim 12-20$ fm for DDDI-b. Note that the pairing gap of DDDI-a1 and DDDI-a2 in dilute neutron matter is reduced from the BCS value (corresponding to DDDI-b) by a factor of about 2, leading to a longer coherence length in these realistic gap models.

Figure caption (fragment): Results of the three gap models, DDDI-b, DDDI-a1 and DDDI-a2, are shown for the cells in Table 2. The horizontal axis is the baryon density $\rho_b$ of the cells. The Wigner-Seitz radius $R_{\rm cell}$ of the cells, taken from Ref. [30], is also plotted for comparison.
Another case where a long-range proximity effect is predicted is cell 1, at relatively high density, where the external neutron density $\rho_{n,M} \sim 0.04$ fm$^{-3} \approx \rho_0/2$ is about half that of saturated nuclear matter. The pair density deviates from that of the uniform neutron superfluid over the whole area of the Wigner-Seitz cell. In this cell, with its relatively high neutron density, the predicted coherence length $\xi$ varies from 7 to 30 fm, depending rather strongly on the pairing model and reflecting the uncertainty of the gap at such densities. However, because of the relatively high baryon density and a large N/Z ratio, the Wigner-Seitz radius $R_{\rm cell}$ becomes small ($\sim 20$ fm), while the edge position $r = R_{\rm edge}$ of the nuclear cluster becomes as large as $\sim 13$ fm due to the thick neutron skin of the cluster. Consequently, the range $R_{\rm edge} + \xi$ of the proximity effect exceeds the Wigner-Seitz radius irrespective of the uncertainty in the pairing gap. Note that cell 1 corresponds to a deep layer of the inner crust, where a transition to the so-called pasta phase is about to occur. The present result suggests a strong proximity effect also for the pasta phases at higher baryon density. We remark also that the proximity effect in these deep layers might be even stronger than the present prediction because of the presence of adjacent nuclear clusters in the lattice configuration, but a quantitative evaluation is beyond the scope of the present study. Conclusion We have studied in detail the proximity effect of the neutron pair correlation in the inner crust of neutron stars by applying the Skyrme-Hartree-Fock-Bogoliubov theory formulated in the coordinate representation. We describe a many-nucleon system consisting of Z protons (which form a nuclear cluster) and neutrons with a given positive Fermi energy, confined in a spherical box. If we choose the box radius $R_{\rm box}$ equal to the Wigner-Seitz radius of the lattice cell, the calculation corresponds to the Wigner-Seitz approximation often adopted in preceding studies. We found, however, that for the realistic Wigner-Seitz radii $R_{\rm cell} \sim 20-50$ fm of the inner crust matter, the influence of the box truncation, i.e. the finite-size effect, is not negligible for a quantitative analysis of the proximity effect. We therefore use a large-box configuration in which the box size is chosen sufficiently large, $R_{\rm box} \geq 100$ fm. In other words, we considered a simplified model of the inner crust matter in which a single nuclear cluster is immersed in a uniform neutron superfluid, prepared in a sufficiently large box. As the effective interaction causing the pairing correlation, we introduced new parameterizations of the density-dependent delta interaction (DDDI-a1 and DDDI-a2) that reproduce the ab initio evaluations of the pair gap in low-density neutron matter as well as the experimental pair gap in finite nuclei. Focusing on the neutron pair density $\tilde\rho_n(r)$ (i.e. a locally defined pair condensate), we have examined how $\tilde\rho_n(r)$ is affected by the presence of the nuclear cluster and how this quantity around the cluster converges to the limiting value of the immersing neutron superfluid. A systematic analysis shows that the range of the proximity effect is characterized by the coherence length of the neutron superfluid measured from the edge position of the cluster.
Applying the above result to the realistic configurations of the inner crust, we predict that the proximity effect is well limited to the vicinity of the nuclear cluster, i.e. to a region much smaller than the Wigner-Seitz cell, in the middle layers of the inner crust with $5\times 10^{-4}$ fm$^{-3} \lesssim \rho_b \lesssim 2\times 10^{-2}$ fm$^{-3}$. On the contrary, the proximity effect is predicted to extend over the whole volume of the Wigner-Seitz cell in the shallow layers of the inner crust with $\rho_b \lesssim 2\times 10^{-4}$ fm$^{-3}$. Another region where the range of the proximity effect is expected to cover the whole Wigner-Seitz cell is the deep layers of the inner crust with $\rho_b \gtrsim 5\times 10^{-2}$ fm$^{-3}$, where the Wigner-Seitz radius becomes small, $R_{\rm cell} \lesssim 20$ fm, while the coherence length may become comparable to or larger than $R_{\rm cell}$. This observation indicates that in these layers there is no clear separation between the nuclear cluster and the immersing neutron superfluid as far as the pairing correlation is concerned. It implies that phenomena originating from the pair correlation and superfluidity, such as vortex pinning and superfluid phonon excitations, may also be affected by the proximity effect. It is noted also that theoretical approaches taking into account the lattice configuration are preferable for such cases. This is a subject to be pursued in a future study. Appendix A: Effective contact interaction for the GMB gap Here we discuss the parameter set of the DDDI which reproduces the pairing gap of Gor'kov and Melik-Barkhudarov (GMB) in the dilute limit of neutron matter. This is obtained by combining the known arguments on the GMB pairing gap [31,32] and on the effective strength of the contact interaction [35,36]. Let us first outline the relation between the strength of the contact interaction and the pairing gap in the BCS approximation. For the pairing interaction of the contact two-body force

$$v(\vec r_1-\vec r_2) = V_0\,\delta(\vec r_1-\vec r_2), \qquad (6)$$

the gap equation in the weak-coupling BCS approximation reads

$$1 = -V_0 \sum_k \frac{1}{2E_k}, \qquad E_k=\sqrt{(e_k-\lambda)^2+\Delta^2}, \qquad (7)$$

where $e_k = \hbar^2k^2/2m$, $\lambda = e_F = \hbar^2k_F^2/2m$, and $\Delta$ are the single-particle energy, the Fermi energy (with the Fermi momentum $k_F$) and the pairing gap, respectively. To avoid the divergence inherent to the contact interaction, the sum $\sum_k \equiv \frac{1}{(2\pi)^3}\int_0^{k_c} 4\pi k^2\,dk$ is performed with a cut-off momentum $k_c$ or a cut-off single-particle energy $e_{\rm cut} = \hbar^2 k_c^2/2m$. The force strength $V_0$ can be chosen so that the same interaction reproduces the zero-energy T-matrix $T_0 = \frac{4\pi\hbar^2 a}{m}$, with $a$ the scattering length of nucleon scattering in the $^1S_0$ channel. This requirement is expressed in terms of the Lippmann-Schwinger equation for the T-matrix, which can be written as

$$\frac{1}{V_0} = \frac{1}{T_0} - \sum_k \frac{1}{2e_k}, \qquad (8)$$

which determines the force strength $V_0$ as [35,36]

$$V_0 = \left[\frac{1}{T_0} - \sum_k \frac{1}{2e_k}\right]^{-1}. \qquad (9)$$

The gap equation (7) combined with the T-matrix equation (8) is written as

$$-\frac{1}{T_0} = \sum_k\left(\frac{1}{2E_k}-\frac{1}{2e_k}\right). \qquad (10)$$

The gap equation (10) is known to be solvable analytically in the low-density limit $k_F \to 0$ satisfying $k_F|a| \ll 1$ and $k_F \ll k_c$ [37,38]. The right-hand side of Eq. (10) is evaluated as $N_0 \log\frac{8e_F}{e^2\Delta}$, where $N_0 = \frac{mk_F}{2\pi^2\hbar^2}$ is the single-particle level density at the Fermi energy. The pairing gap in this limit is then given [23,32,37,38] as

$$\Delta = \frac{8}{e^2}\,e_F\,\exp\left(\frac{1}{N_0 T_0}\right) = \frac{8}{e^2}\,e_F\,\exp\left(\frac{\pi}{2k_F a}\right). \qquad (11)$$

Note that the T-matrix $T_0$ plays the role of a renormalized interaction strength of the contact force. It is known that the medium effect in the low-density limit can be evaluated perturbatively, as originally discussed by Gor'kov and Melik-Barkhudarov [26].
The effect is represented as an induced interaction [31,32], $U_{\rm ind} = N_0 T_0^2 (1+2\log 2)/3$, which modifies the interaction strength as $T_0 \to T_0 + U_{\rm ind}$; the numerical factor $(1+2\log 2)/3$ arises from an average of the Lindhard function. Correspondingly, the left-hand side of the gap equation (10) becomes

$$-\frac{1}{T_0+U_{\rm ind}} \approx -\frac{1}{T_0} + N_0\,\frac{1+2\log 2}{3}, \qquad (12)$$

and hence the GMB pairing gap $\Delta_{\rm GMB}$, valid in the low-density limit, is given as

$$\Delta_{\rm GMB} = \frac{8}{e^2}\,e_F\,\exp\left(\frac{\pi}{2k_F a}\right)\frac{1}{(4e)^{1/3}}, \qquad (13)$$

with a reduction by a factor of 1/2.2 from the BCS gap. Now, by combining the argument on the contact force, Eq. (8), and the induced interaction modifying the l.h.s. of the gap equation, Eq. (12), we find that an effective strength $V_{\rm GMB}$ of the contact force which reproduces the GMB pairing gap is given by

$$\frac{1}{V_{\rm GMB}} = \frac{1}{T_0} - N_0\,\frac{1+2\log 2}{3} - \sum_k\frac{1}{2e_k}, \qquad (14)$$

which determines $V_{\rm GMB}$ as

$$V_{\rm GMB} = \left[\frac{1}{T_0} - N_0\,\frac{1+2\log 2}{3} - \sum_k\frac{1}{2e_k}\right]^{-1}. \qquad (15)$$

We note that the force strength $V_{\rm GMB}$ depends on the Fermi momentum $k_F$. Expanded in powers of $k_F$, the term relevant to the low-density limit $k_F \to 0$ is the one linear in $k_F$. It can be expressed also in terms of the density $\rho = k_F^3/3\pi^2$ as

$$V_{\rm GMB}[\rho] \approx V_0\left[1 + \eta\,\frac{mV_0}{2\pi^2\hbar^2}\,(3\pi^2\rho)^{1/3}\right], \qquad (16)$$

with $\eta = \frac{1+2\log 2}{3}$.

For a given value of the neutron Fermi energy $\lambda_n$, we numerically solve the coupled equations

$$\Delta = -\frac{V_n[\rho_n]}{4\pi^2}\int_0^{k_c}\frac{\Delta}{E(k)}\,k^2\,dk, \qquad (17)$$
$$E(k) = \sqrt{\left(e(k)-\lambda_n\right)^2+\Delta^2}, \qquad (18)$$
$$e(k) = \frac{\hbar^2k^2}{2m_n^*(\rho_n)} + U_n(\rho_n), \qquad (19)$$
$$\rho_n(\lambda_n) = \frac{1}{2\pi^2}\int_0^{k_c}\left[1-\frac{e(k)-\lambda_n}{E(k)}\right]k^2\,dk, \qquad (20)$$

where $U_n(\rho_n)$ and $m_n^*(\rho_n)$ are the Hartree-Fock potential and the effective mass of neutrons, obtained from the SLy4 functional. The cut-off momentum $k_c$ is determined by $e(k_c)-\lambda_n = E_{\rm cut}$, so that it corresponds to the cut-off energy in the coordinate-space HFB calculation. The above scheme is called the uniform-BCS calculation in this paper. The coherence length $\xi$ can be calculated by evaluating the size of the Cooper pair, and is given by

$$\xi = \sqrt{\langle r^2\rangle} = \sqrt{\frac{\sum_k\left|\nabla_k(u_kv_k)\right|^2}{\sum_k\left|u_kv_k\right|^2}}. \qquad (21)$$
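For concreteness, here is a minimal numerical sketch of the uniform-BCS scheme above: it solves the cut-off gap equation for Delta by bisection at fixed Fermi energy and then evaluates the density integral. A constant interaction strength, a bare mass (m* = m, U_n = 0), and the parameter values are simplifying assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Sketch of the uniform-BCS scheme: solve the cut-off gap equation for Delta
# at fixed Fermi energy, then evaluate the density integral. Assumptions:
# constant strength V (no density dependence), m* = m, U_n = 0.
hbar2_2m = 20.736            # hbar^2/2m, MeV fm^2
V = -458.4                   # MeV fm^3, contact strength (illustrative)
lam = 2.9                    # neutron Fermi energy, MeV
e_cut = 60.0                 # cut-off energy above lambda, MeV
k_c = np.sqrt((e_cut + lam) / hbar2_2m)

def gap_residual(delta):
    """Root of 1 + (V / 4 pi^2) * integral of k^2 / E(k) dk defines the gap."""
    integrand = lambda k: k**2 / np.sqrt((hbar2_2m * k**2 - lam)**2 + delta**2)
    val, _ = quad(integrand, 0.0, k_c, limit=200)
    return 1.0 + V / (4.0 * np.pi**2) * val

delta = brentq(gap_residual, 1e-6, 20.0)   # pairing gap in MeV

def density_integrand(k):
    ek = hbar2_2m * k**2
    E = np.sqrt((ek - lam)**2 + delta**2)
    return (1.0 - (ek - lam) / E) * k**2

rho_n, _ = quad(density_integrand, 0.0, k_c, limit=200)
rho_n /= 2.0 * np.pi**2
print(f"Delta = {delta:.3f} MeV, rho_n = {rho_n:.2e} fm^-3")
```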
10,021.2
2020-03-27T00:00:00.000
[ "Physics" ]
FrameNet-assisted Noun Compound Interpretation Given a noun compound (NC), we address the problem of predicting the appropriate semantic label linking the constituents of the NC. This problem is called Noun Compound Interpretation (NCI). We use FrameNet as a semantic label repository. For example, given the noun compound board approval, we predict the frame (DENY OR GRANT PERMISSION, as per FrameNet) as appropriate, and the semantic role of the modifier word (AUTHORITY) as the semantic label linking board and approval; the resulting label is DENY OR GRANT PERMISSION:AUTHORITY. Our semantic label repository is very large (≈ 11k labels) compared to the NC data available for training (≈ 1900 examples). Thus, learning in this case, especially for unseen semantic labels, is hard. We propose to solve this problem by predicting semantic labels in a continuous label embedding space, which is novel. This embedding space is created by learning label embeddings using the FrameNet data. The embeddings are then used to train two separate models: one for predicting frames and the other for FEs. As the label embedding space captures the semantics of the labels, using these embeddings enables generalizing well to unseen labels, thus achieving zero-shot learning. Our preliminary investigations show that the proposed approach performs well for unseen labels, achieving improvements of 5 and 2 percentage points over the baselines for frame and FE prediction, respectively. The study shows the promise of continuous-space embeddings for noun compound interpretation and points to the need for further investigation. Introduction A noun compound is a sequence of two or more nouns that act as a single entity with a well-defined meaning (e.g., paper submission, colon cancer, etc.). The semantic relations between the component nouns are implicit. For instance, the information that 'it is a juice made from orange' is hidden in orange juice. Uncovering this semantic relation is called the problem of Noun Compound Interpretation (NCI). NCI needs machine learning, as the task faces the challenge of ambiguity, and disambiguation by rules is well-nigh impossible because of the multifarious complex underlying language phenomena. The proposition of storing NCs and doing a table lookup for interpretation is also impractical due to the large number of NCs and the challenge of high productivity (new nouns and NCs are created frequently; e.g., corona vaccine is a relatively new NC). Often, the exact relation, sentiment, etc. are also governed by contextual pragmatics. For instance, the sentiment towards tax money depends on who the beneficiary is, which in turn depends on the predicate. The predicate give could indicate negative sentiment (for the tax-payer), whereas the predicate receive would indicate positive sentiment (for the government). Due to such instances, NLP tasks such as machine translation (Baldwin and Tanaka, 2004;Balyan and Chatterjee, 2015), textual entailment (Nakov, 2013), question answering (Ahn et al., 2005), etc. suffer when they encounter noun compounds. For example, for the text-question pairs below, a system would need to interpret the semantics underlying each compound to answer the question correctly: (a) "student protest": "who is protesting?", (b) "fee-hike protest": "why protest?", and (c) "university protest": "where is the protest?" In this work, we interpret only compositional noun-noun compounds.
A noun-noun compound is categorised as compositional if the meaning of the compound can be composed from the semantics of the individual noun units present. From a relation-representation perspective, noun compounds are interpreted in two ways: via labelling and via paraphrasing. Labelling involves assigning an abstract semantic relation from a predefined set, for example, orange juice: MADEOF, hillside home: LOCATION, etc. There are many inventories of predefined semantic relations. We use the FrameNet-based labels proposed by Ponkiya et al. (2018a). As per their convention, the head noun of a compound invokes the frame, and the modifier noun fits in one of the frame elements of the invoked frame, vide 'board approval' in the abstract. There are more than 11,000 FEs in FrameNet, and we have about 1900 training examples. Thus, the average number of examples per label is quite small, and many labels do not have a single training example. In summary, the contributions of this paper are three-fold: 1. We embed FrameNet entities in a continuous space, perform prediction in the continuous space to generalize over unseen labels, and show performance improvements on the unseen labels. 2. We create a noun-compound annotation tool that assists annotators in providing manual labels, and we release it publicly. 3. Using the above tool, we extend the dataset released by Ponkiya et al. (2018a) with 326 more manually-annotated gold samples, and release it for further research. The rest of the paper is organized as follows: Section 2 discusses related work, and Section 3 gives an overview of the foundations of the work. Section 4 details our approach. Section 5 provides experimental details: the dataset used and the training/testing setup. Section 6 discusses the results and analysis, followed by the conclusion and future work. The code, dataset and the tool can be downloaded from http://www.cfilt.iitb.ac.in/nc-dataset. Related Work A relation between the components of a noun compound (say, chocolate cake) can be represented in one of the following two ways: (1) assigning a relation from a predefined set of semantic relations (MADEOF), or (2) using a paraphrase to convey the underlying semantic relation ("cake made using chocolates" or "cake with chocolate flavor"). Noun-compound (NC) interpretation via labelling is the most commonly used methodology for NC interpretation. Scholars have proposed many inventories of semantic relations (Levi, 1978;Warren, 1978;Vanderwende, 1994;Lauer, 1995;Barker and Szpakowicz, 1998;Ó Séaghdha, 2007;Rosario et al., 2001;Tratz and Hovy, 2010;Fares, 2016;Ponkiya et al., 2018a). A recent FrameNet-based inventory by Ponkiya et al. (2018a) proposed FEs (Frame Elements) from FrameNet as labels (or semantic relations). They released a dataset by annotating each noun compound with a frame and a frame element, and proposed this annotation for predicate 'nominalization'. However, it also works for most cases of 'predicate deletion'. For automatic labelling, the architectures of Dima and Hinrichs (2015) and Fares et al. (2018) are similar to ours. Dima and Hinrichs (2015) proposed a feed-forward neural-network-based approach. Their network takes the concatenated embeddings of the component nouns as input and predicts one of the labels from Tratz and Hovy (2010)'s label set. Fares et al. (2018) used a similar feed-forward network to predict two types of relations. Their network, however, shares the initial layers and has separate output layers for each label type.
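As a concrete picture of this family of feed-forward labelling models, here is a minimal sketch: the two component-noun embeddings are concatenated and mapped to label scores. The dimensions, vocabulary size and label-set size are illustrative assumptions, not the original authors' exact configurations.

```python
import torch
import torch.nn as nn

# Sketch of a Dima-and-Hinrichs-style labelling model: concatenate the
# embeddings of the two component nouns and classify into a fixed label set.
# Dimensions, vocabulary and label-set size are illustrative assumptions.
VOCAB, DIM, N_LABELS = 10000, 300, 40

word_emb = nn.Embedding(VOCAB, DIM)     # initialized from pre-trained vectors
model = nn.Sequential(
    nn.Linear(2 * DIM, 512),
    nn.ReLU(),
    nn.Linear(512, N_LABELS),           # one score per semantic relation
)

def label_scores(modifier_id, head_id):
    """Scores over the predefined relation inventory for one compound."""
    x = torch.cat([word_emb(torch.tensor(modifier_id)),
                   word_emb(torch.tensor(head_id))])
    return model(x)

scores = label_scores(12, 345)          # e.g., ids for (orange, juice)
print(scores.shape, int(scores.argmax()))
```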
NC interpretation via paraphrasing is another methodology that contains approaches such as prepositional and free paraphrasing. Prepositional paraphrasing, i.e., paraphrasing using a preposition, for example, student protest: "protest by student(s)", is a relatively well-attended problem (Lauer, 1995;Lapata and Keller, 2004;Ponkiya et al., 2018b). All the above approaches proposed for prepositional paraphrasing use the fixed-set of eight prepositions proposed by Lauer (1995). The other set of approaches, i.e., free paraphrasing, however, has not received much attention. Apart from two SemEval tasks (Butnariu et al., 2009;Hendrickx et al., 2013), it does not have much literature available. A recent study (Ponkiya et al., 2020) expresses paraphrasing as a "fill-in-theblank" problem, and utilizes pre-trained language models, for the task of noun-compound interpretation. Foundations Levi (1978) performed a linguistic study to understand how noun compounds are generated. They call such compounds nominal compounds. This theory puts nominal compounds into two categories, based on the compounding process, as 1. Predicate Deletion: Here, a predicate between the components is dropped to create a compound. For example, apple pie is a "pie made from apple." The predicate made from is dropped in this case. Similarly, for elbow injury, gas pipeline, etc. Predicate Nominalization: Here, the head noun is a nominalized form of a verb, and the modifier is an argument of the verb. For example, "The union demonstrated against the price hike. . . " becomes "The union demonstration against the price hike. . . " Verbal noun as head: student demonstration, government approval, opposition objection, etc. Verb form as head: student protest, government support, competition schedule, etc. Levi (1978) also proposed a set of abstract predicates 1 for the former category, but no relation for the latter category. Later,Ó Séaghdha (2007) revised this inventory and proposed a two-level hierarchy of semantic relations. Ponkiya et al. (2018a) proposed a method to use FrameNet based labels for noun compounds. Here, the head noun invokes a frame, and the modifier noun fits in one of the slots of the frame. They also prepared a dataset by annotating each noun compound with a frame and a frame element. Ponkiya et al. (2018a) proposed this annotation for predicate nominalization, which also works for most cases of predicate deletion. FrameNet FrameNet 2 (Baker et al., 1998) is a taxonomy based on Fillmore's theory of Frame Semantics. This theory claims that most words' meanings can be inferred based on a semantic frame: a conceptual structure that denotes an abstract event, relation, or entity and the involved participants. For example, the concept of questioning involves a person asking a question (SPEAKER), person/people begin questioned ADDRESSEE, the content of the question MESSAGE, and so on. In FrameNet, such a concept is represented by QUESTIONING frame. The participating entities, such as SPEAKER, ADDRESSEE, MESSAGE, etc., are called frame elements (FEs). Such frames are invoked in running text via words known as lexical units. Some of the lexical units for the QUESTIONING frame are ask, grill, inquire, inquiry, interrogate, query, etc. FrameNet data provides two types of linkages between entities: (a) relations: linking among frames or among FEs, and (b) mappings: linking from words to frames and from frames to FEs. Relations FrameNet includes a graph of relations between frames along with relations among frames. 
Relations: FrameNet includes a graph of relations among frames, along with relations among frame elements. Some of the important frame relations are: • Inheritance: close to a typical Is-A relation. In our work, we utilize the Relations for the generation of frame and frame element embeddings (§3.2). We further utilize the Mappings to prune the search space (§4.1). Knowledge Graph Embeddings A Knowledge Graph G is a set of relations R defined over a set of entities E. Formally, it is comprised of a set of N triples (h, r, t), where h and t are called the head and tail entities, and r denotes a relation between them. Knowledge graphs are widely used to store knowledge in a structured format, and they play an important role in representation learning. Methods for learning representations of both entities E and relations R have been explored (Wang et al., 2017) with the aim of representing graphical knowledge. Various algorithms for representation learning have been proposed, which help tasks such as link prediction. TransE (Bordes et al., 2013) is a method that models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. We use the ConvE (Dettmers et al., 2018) algorithm to get embeddings of frames and frame elements. For the training of ConvE, we treat all relations from FrameNet as triples of a knowledge graph. ConvE ConvE (Convolution-based Embeddings) is a multi-layer 2D-convolution network model proposed by Dettmers et al. (2018). It uses fewer parameters, yet is efficient compared to similar models. It defines the scoring function (for each relation r) as follows:

$$\psi_r(e_h, e_t) = f\left(\mathrm{vec}\left(f\left([\bar{e}_h; \bar{e}_r] * w\right)\right) W\right) e_t,$$

where $e_h$, $e_r$ and $e_t$ are embeddings of the head h, relation r and tail t, respectively, $\bar{x}$ denotes the reshaping of a vector x into a matrix, f is a rectified linear unit (ReLU) function, vec converts a matrix into a flat vector, w is the convolution kernel, and W is the parameter of a fully connected layer. For training, it applies the logistic sigmoid function σ(·) to the scores and minimizes the binary cross-entropy computed using the following formula:

$$\mathcal{L} = -\frac{1}{N}\sum_{i}\left(t_i \log(p_i) + (1-t_i)\log(1-p_i)\right),$$

where $p = \sigma(\psi_r(e_h, e_t))$ and t is 1 when (h, r, t) ∈ G, and 0 otherwise. ConvE uses two embedding layers: one for entities and the other for relations, both initialized randomly. The embedding layers get updated during the training. At the end of the training, the embedding layers contain the embeddings for entities and relations.
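A minimal sketch of a ConvE-style scorer, patterned on the scoring function above, follows. The embedding size, the reshape dimensions and the channel counts are illustrative assumptions; the published model additionally uses dropout and batch normalization, omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal ConvE-style scorer patterned on the scoring function above.
# Embedding size, reshape dims and channel counts are illustrative choices;
# dropout and batch normalization of the published model are omitted.
class ConvEScorer(nn.Module):
    def __init__(self, n_ent, n_rel, dim=200, h=10, w=20):
        super().__init__()
        assert h * w == dim
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.h, self.w = h, w
        self.conv = nn.Conv2d(1, 32, kernel_size=3)       # kernel "w" above
        flat = 32 * (2 * h - 2) * (w - 2)                 # conv output size
        self.fc = nn.Linear(flat, dim)                    # matrix "W" above

    def forward(self, head_idx, rel_idx):
        # Reshape and stack the head and relation embeddings into one "image".
        eh = self.ent(head_idx).view(-1, 1, self.h, self.w)
        er = self.rel(rel_idx).view(-1, 1, self.h, self.w)
        x = torch.cat([eh, er], dim=2)                    # (B, 1, 2h, w)
        x = F.relu(self.conv(x))
        x = F.relu(self.fc(x.flatten(start_dim=1)))
        # Score every candidate tail entity by a dot product with its embedding.
        return x @ self.ent.weight.t()                    # (B, n_ent)

scorer = ConvEScorer(n_ent=50, n_rel=8)
scores = scorer(torch.tensor([0]), torch.tensor([1]))
loss = F.binary_cross_entropy_with_logits(scores, torch.zeros_like(scores))
print(scores.shape, float(loss))
```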
Our Approach FrameNet has 1223 frames and 11,473 frame elements. However, the existing dataset for FrameNet-based noun compound interpretation does not have examples for many frames and frame elements. Unlike other relation inventories, though, we have the FrameNet taxonomy, which can help in building a better model. We first explain our frame prediction approach and then extend it for FE prediction. System Architecture We encode a given noun compound nc = w₁w₂ (say, divorce rate) using a feed-forward network to get a vector v_nc. Using the FrameNet API, we create a set of candidate frames that can be invoked by w₂ (rate → {ASSESSING, PROPORTION, SPEED DESCRIPTION, etc.}). For each candidate frame f_i, we take its frame embedding e_{f_i} from the frame embedding layer. We take the dot product of v_nc with the embedding e_{f_i} of each frame f_i to compute the score for the frame. For testing, we predict the frame with the highest score among the candidates. Figure 1: Basic system architecture illustrating frame prediction for divorce rate. Dima and Hinrichs (2015) and Fares et al. (2018) use a simple feed-forward network. In our model, if we remove the frame/FE embedding matrix and use it as the weights of one more dense layer (after the "fully-connected layers"), our model becomes identical to theirs. In doing so, (a) our model does NOT need any extra computation to compute the score of labels that are NOT part of the candidate set, and (b) back-propagation does not have to pass through an additional layer, which might not be effective. We implement these models in PyTorch (Paszke et al., 2017). We initialize the word embedding layer with Google's pre-trained embeddings³ and initialize the frame embedding layer with random values in one case (the baseline) and with pre-trained frame embeddings in the other case. (³ https://code.google.com/archive/p/word2vec/) We use the same architecture to train another model for FE prediction, replacing the frame embedding layer with an FE embedding layer; the candidate FEs are the FEs from all candidate frames. We take all FEs as the candidate set if no such mapping is found. Frame and Frame Element Embeddings Inspired by Kumar et al. (2019)'s approach to the task of Word Sense Disambiguation (WSD), we propose a similar approach to perform NC interpretation. Our approach uses the definitions of entities (along with the relations) to learn entity embeddings and relation embeddings. It uses an encoder (Bi-LSTM) to encode the definition of an entity and uses the encoded representation as the embedding of the entity for ConvE. During training, it optimizes both the encoder and ConvE. After training, the encodings of the definitions are taken as entity embeddings. We train ConvE twice, to get frame and frame element embeddings separately. ConvE training is independent of the main training. Experimental Setup In this section, we explain our dataset, baselines, training, and evaluation metrics. Dataset Creation and Analysis We use the dataset released by Ponkiya et al. (2018a) as D1. The dataset contains 1546 noun-noun compounds with two labels: frame and FE. The dataset was created by extracting noun compounds along with labels from the FrameNet data. As the extraction is automatic and the manual step only confirms the correctness of the labelling, the labels are not exhaustive. For instance, the noun compound student demonstration has been annotated with PROTEST:PROTESTER. However, the following labels are also applicable: REASONING:ARGUER and CAUSE TO PERCEIVE:ACTOR. So, we annotate more examples with all possible labels. Manual Annotation We manually annotate 326 noun compounds and call this set D2. We extend D1 by merging these examples from D2 to perform our experiments. The annotation was performed by one of the authors and hence does not warrant a discussion of inter-annotator agreement. We note, however, that the annotations were performed manually by a human, which allows them to be considered gold-standard. The author chose the examples randomly from Tratz and Hovy (2010)'s dataset. During the annotation process, we encountered some difficulties because of the coverage issues of FrameNet. The word-to-frame mapping in FrameNet has a coverage issue, which has been widely reported in the literature (Pavlick et al., 2015;Botschen et al., 2017). We categorize the coverage issues into the following: No Candidate Frames: The word-to-frame mapping returned no candidate frame. In some cases, we could find a frame with manual effort (cf. Table 1).
However, despite manual effort, we could not always find an appropriate frame (e.g., for star autograph, employee misconduct, etc.). Inappropriate Candidate Frames: The word-to-frame mapping returned candidate frames, but none was appropriate. For example, for body heat, the mapping returns two candidate frames, including CHANGE OF TEMPERATURE, but none of the two frames is appropriate for body heat. In some cases, we could find an appropriate frame that was not part of the candidate set. For example, for the noun compound ulcer drug, the candidate frames are INTOXICANTS and CAUSE HARM, but the appropriate frame is CURE. No Suitable Frame Element in the Frame: We could find an appropriate frame, but no frame element from the frame is appropriate. For instance, the BUSINESS frame is suitable for retail operation, but no frame element from the frame is suitable for the modifier noun retail. NC Annotation Tool To handle the first two cases (finding a frame), we use synonyms from WordNet (Miller, 1994) and FrameNet+ data (Pavlick et al., 2015). To simplify the annotation process, we developed a tool (Figure 2) that makes annotation easier. We split each dataset (D1 and D1+D2) randomly for 5-fold validation. Each fold contains three disjoint sets: a training set (60% of compounds), a validation set (20%), and a test set (20%). We use the same folds across all experiments, so results across different models are comparable. Frame and Frame Element Embeddings To get frame embeddings, we consider frames as entities and frame relations from FrameNet as relations between the entities. Then we train ConvE (§4.2) to learn frame embeddings. We use these entity embeddings to initialize the frame embedding layer. Table 2 shows the ten most similar frames to the FRIENDLY OR HOSTILE frame based on cosine similarity between frame embeddings. Similarly, we get embeddings of frame elements using frame elements and their relations in FrameNet. Baseline The first baseline is random prediction: the probability of predicting a label from a candidate set is uniform. We take expected counts to compute the metrics. For instance, we compute the random accuracy as

$$Acc_{random} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{|C_i|},$$

where $C_i$ is the candidate label set for the i-th test instance and N is the number of test instances. We also use Support Vector Machines (SVM) (Cortes and Vapnik, 1995) as a baseline approach for this task. We use the sklearn library (Pedregosa et al., 2011) for this approach. The input to the SVM-based approach is the concatenated vector of the individual lexical units. We provide results for another baseline where we use the same architecture (§4.1) with random initialization for the frame/FE embeddings. Training Given a noun compound, we get candidate labels using the FrameNet mapping. We compute scores for the candidate labels and compare them with the target to compute the loss value. We minimize categorical cross-entropy with stochastic gradient descent (with momentum). The frame/FE embedding layer remains fixed (non-trainable) for the initial few epochs. As a stopping criterion, we monitor performance on the validation set. Evaluation We report weighted Precision, Recall, and F1-score for our experiments. The weight for each label is proportional to the number of test examples for the label. The (weighted) precision is computed as

$$P = \sum_{l}\frac{N_l}{N}\,P_l, \qquad P_l = \frac{TP_l}{TP_l + FP_l},$$

where $P_l$ is the precision score, $TP_l$ is the number of true positives and $FP_l$ the number of false positives for a label l, $N_l$ is the number of instances with label l in the test set, and N is the total number of instances in the test set.
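The weighted precision above is straightforward to compute; here is a small sketch that checks a manual implementation of the formula against the library routine. The toy gold and predicted labels are made up for illustration.

```python
from sklearn.metrics import precision_score

# Sketch of the weighted precision above: per-label precision P_l averaged
# with weights N_l / N. The toy gold/predicted labels are made up.
gold = ["PROTEST:PROTESTER", "CURE:AFFLICTION", "PROTEST:PROTESTER"]
pred = ["PROTEST:PROTESTER", "BUSINESS:PRODUCT", "PROTEST:PROTESTER"]

# Library computation.
p_lib = precision_score(gold, pred, average="weighted", zero_division=0)

# Manual computation following the formula: P = sum_l (N_l / N) * P_l.
N = len(gold)
p_manual = 0.0
for l in set(gold):
    tp = sum(g == p == l for g, p in zip(gold, pred))
    fp = sum(p == l and g != l for g, p in zip(gold, pred))
    P_l = tp / (tp + fp) if (tp + fp) else 0.0
    N_l = sum(g == l for g in gold)
    p_manual += (N_l / N) * P_l

print(f"library: {p_lib:.3f}, manual: {p_manual:.3f}")
```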
We compute all of these metrics for frame and frame element independently, using the Scikit-learn library (Pedregosa et al., 2011).

Results and Analysis
The reported results are averaged across 5-fold cross-validation. We define the unseen set as the subset of test samples whose output label has no samples in the training set. In one fold (of D1) with 310 test samples, the unseen set comprises (1) 30 unique frames, covering 32 test samples, and (2) 75 unique FEs, covering 82 test samples. These cases are challenging to handle; our prediction in continuous space helps here, as the target space embeds the labels. Table 3 reports the performance of the baselines compared to our system for frame prediction on the entire test set. Our system beats the random baseline and the SVM model across all metrics by a significant margin. Similarly, Table 4 shows that frame prediction on the unseen set improves over the random baseline in both cases, viz. WITH and WITHOUT frame embeddings; the improvement is substantial on both datasets (D1 and D1+D2). We attribute this to the fact that our model captures the frame semantics of a frame in a continuous space better than the baselines do. However, as seen in Table 3, the model WITHOUT frame embeddings shows results comparable to the model WITH frame embeddings; the marginal difference does not appear significant. As the dataset grows, we expect the system with frame embeddings to perform better than the system without them. Having discussed frame prediction, we now present our results for frame element prediction. As Table 5 shows, the random baseline and SVM-based models are outperformed by our method for frame element prediction as well. The improvements over both datasets (D1 and D1+D2) are at least 5% (D1+D2/SVM). With the extended dataset (D1+D2), our approach improves across all three measures (precision, recall, and F-score). In Table 3, for frame prediction, we observe an increase in the stronger baseline score (SVM) when the extended dataset is used. However, for frame element prediction we observe that the model using frame embeddings is significantly outperformed by the model that does not use them. Manual analysis of the training set shows that our datasets contain many frame elements with very few examples (sometimes only one, as discussed below). This creates a data skew: such a sample lands either in training or in testing, so the model is either untrained for that test case or trained on a label that is never tested. In Table 6 we see that the random baseline outperforms our method because of this skew; multiple frame elements present in the unseen test data have never been seen during training. We do not report SVM performance in Table 6 and Table 4, since the precision, recall, and F-score for SVM were all 0.
This performance can be attributed to the fact that SVM does not generalize to unseen labels and, in this case, cannot predict them at all. For frame element prediction, Table 5 shows that the extended dataset improves the overall quality of predictions, with an improved score for each approach, including the baselines. These results indicate that the dataset extension does indeed help the task of NC interpretation. In Figure 3, we see that the average number of candidate FEs for a test sample is only 26. Without a single training example, our system correctly predicts 51.22% (more than half) of the unseen samples among its top-7 predictions, which is higher than the top-1 accuracy of any baseline approach on the complete test set. As k increases, the margin between our system and the baseline remains nearly constant on the whole test set; on unseen labels, however, the system with FE embeddings outperforms the baseline by a growing margin. Overall, we observe a significant improvement in results with our method, and we show that our extended dataset helps improve model performance in both cases.

Conclusion and Future Work
In this paper, we proposed a novel method that uses FrameNet for NC interpretation in a continuous space. We use FrameNet mappings (word to frame and frame to frame element) to prune the search space. Our approach, prediction in continuous space, outperforms the random baseline and a stronger baseline approach. We show that the label embeddings generated with our approach help generalisation over unseen labels. We annotated more noun compounds and analysed the difficulty of finding frames and frame elements. We create and release a tool that assists annotators in frame identification for further research, and we show that extending the dataset created with our tool improves system performance. Our experiments evaluate the proposed method on a small annotated dataset relative to the overall number of labels; we extend this dataset by annotating more NCs for various labels. Our study of the coverage issues in the annotation process informed the design of the tool that assists annotators in finding an appropriate frame. We provide promising results for the task of frame prediction, and we analyse and discuss our results in detail for both the frame and frame element prediction tasks. In the future, we aim to find other ways of using FrameNet data for this task. We would also like to investigate why our approach provides promising results for frame prediction but not for frame element prediction, and to explore further approaches for predicting frame elements effectively. We believe that FrameNet embeddings can prove helpful for other tasks as well.
6,116.6
2021-08-01T00:00:00.000
[ "Computer Science" ]
Effect of Selenium on HLA-DR Expression of Thyrocytes
Autoimmune thyroid diseases (ATDs) are among the most frequent organ-specific autoimmune disorders and result from the interaction between genetic and environmental factors. Selenium has been shown to exert a beneficial effect on autoimmune thyroiditis; despite this therapeutic effect, the mechanism of its action has not been revealed. Objective. To determine whether selenium in in vitro thyrocyte cultures is able to influence the HLA-DR molecule expression of human thyrocytes and the production of free oxygen radicals. Method. Thyrocytes were prepared from human thyroid gland and cultured in vitro in the presence of interferon-γ and sodium selenite. The expression of HLA-DR molecules induced by interferon-γ in the presence of sodium selenite at various concentrations was measured by fluorescence-activated cell sorting. Results. Selenium has a dose-dependent inhibitory effect on the expression of HLA-DR molecules of thyrocytes induced by interferon-γ. This effect of selenium was in inverse correlation with antioxidative capacity. Conclusion. The beneficial effect of selenium on the autoimmune mechanism is complex; both its inhibitory effect on HLA-DR molecule expression and its antioxidative capacity are involved in the therapy of autoimmune thyroiditis.

Introduction
Recently published clinical studies on possible effects of selenium (Se) in autoimmune thyroiditis have evoked exciting discussion. Conflicting data have been published on the effect of Se: some investigators provided evidence that Se intake may be beneficial with respect to autoimmune diseases [1-7], while others were not able to show a significant effect of Se on autoimmune thyroiditis [8, 9]. Furthermore, the authors who published the beneficial effect of Se on the levels of autoantibodies advised the use of Se therapy for patients with autoimmune thyroiditis (AIT) [1, 4, 10]. Recently, we published our placebo-controlled prospective study including 132 patients with autoimmune thyroiditis [4]. L-thyroxine substitution therapy was given in both groups and the level of TSH remained in the normal range. Se therapy was given as L-selenomethionine (per os 2 × 100 μg/day) for one year. The level of Se in the untreated patients' sera was significantly lower than in treated patients and controls, and after three months of therapy serum Se normalized. The titre of antithyroid antibodies (mostly anti-TPO) significantly decreased by the end of the study. An inverse correlation was found between antioxidant capacity and the level of anti-TPO antibodies. This observation suggests that Se deficiency by itself might be responsible for the precipitation of the inflammatory process. Although the precise mechanisms of action and the possible targets of Se have not yet been clarified, the beneficial influence of Se can be explained from different points of view. Growing evidence supports the idea that selenium-containing enzymes and their antioxidant capacity somehow modify the autoimmune mechanism [11-15]. Previously, it was published that, unlike thyroids from healthy individuals, thyroid epithelial cells from patients with AITD were able to express HLA class II antigen molecules similar to those normally expressed on antigen-presenting cells (APCs) such as macrophages and dendritic cells [16-19].
The aberrant expression of HLA class II molecules on thyroid cells may initiate and perpetuate thyroid autoimmunity via direct autoantigen presentation. [Table 1 fragment, HLA-DR expression (%): 33.2 ± 14.7; IFN-γ (100 U/mL) + sodium selenite (50 nM/mL) (n = 3): 26.4 ± 12.7; IFN-γ (100 U/mL) + sodium selenite (100 nM/mL) (n = 3): 11.5 ± 5.2.] Previously we provided evidence for the role of HLA-DR expression on thyrocytes induced by interferon-γ (IFN-γ) and its modification by methimazole, which has a significant antioxidative capacity [20]. It was assumed that Se, like methimazole, can modify the expression of HLA-DR molecules in thyrocyte cultures; therefore, we performed in vitro experiments using human thyrocyte cultures to test this hypothesis.

Materials and Methods
We cultured human thyrocytes and analyzed HLA-DR antigen expression induced by IFN-γ at various concentrations of sodium selenite (Sigma) in the culture media, following a previously published method [16]. Briefly, thyroid epithelial cells were separated from surgical specimens. 4-6 × 10^6 cells were obtained from 10 g of tissue, with viability of >90% as determined by trypan blue exclusion. 2 × 10^5 cells were placed in each well of a 24-well Costar culture plate and cultured in minimum essential medium containing 15% fetal calf serum (FCS) with 0.2% sodium bicarbonate, either alone (control wells) or in the presence of IFN-γ (Hoffmann-La Roche); to other wells, 10.0, 50.0, and 100 nmol/mL of sodium selenite (Sigma) were added. In most experiments, thyrocytes were cultured for 3 days and then detached with 0.2% trypsin. HLA-DR expression was investigated initially (day 0) and on days 3 and 7 of culture. Cells were recovered in Ca++- and Mg++-free EGTA solution with a rubber policeman. The detached cells were resuspended in RPMI containing 10% FCS and 10 mM HEPES (Sigma).

Results
We found that IFN-γ (100 U/mL) was able to induce a significant stimulation of the expression of HLA-DR molecules in thyrocytes (Table 1) (35.2 ± 15.2 versus 3.7 ± 2.4, P < 0.001). The peak of HLA-DR expression was at day three, after which it decreased abruptly. Therefore, we tested the expression of HLA-DR-positive cells induced by IFN-γ at day three in the absence and presence of Se at various concentrations. Se at two concentrations (50 nM/mL and 100 nM/mL, resp.) significantly inhibited the expression of HLA-DR-positive cells induced by IFN-γ (Table 1). When Se was added to the thyrocyte cultures before or after exposure to IFN-γ, no significant changes in HLA-DR expression were observed. The time-dependent effect of sodium selenite (100 nM/mL) on IFN-γ-induced (100 U/mL) HLA-DR expression was also examined.

Discussion
The trace element Se plays an important role in the thyroid gland under physiological conditions as well as in disease. Se supplementation decreased inflammatory activity in patients with autoimmune thyroiditis, and the reduction in titres of anti-TPO antibodies was correlated with serum levels of Se [2, 4, 6, 7]. A convincing observation was published on the beneficial effect of Se in a patient with autoimmune thyroiditis, in whom a marked decrease in thyroid 18FDG uptake was found after Se supplementation [21]. In spite of great efforts, the precise mechanism of Se action has not yet been clarified. The antioxidant property of Se has been reported to be involved in its beneficial effect in autoimmune thyroiditis [12-15].
Previously, we found that methimazole, which proved to have antioxidant capacity, decreased the expression of HLA-DR molecules on the surface of thyrocytes [20]. Our experiments confirmed that Se has a significant radical scavenging effect, and the decrease in the expression of HLA-DR molecules induced by IFN-γ was in inverse correlation with the antioxidative capacity of the thyrocyte supernatant. Exogenous factors, including iodine and oxidative stress, have been reported to be precipitating factors in genetically susceptible individuals [5, 14, 22-26]. The antigenicity of thyroid autoantigens (thyroglobulin and TPO) is increased after iodine exposure. Iodine is able to increase the amount of free radicals that are produced in the process of physiological hormonogenesis in the thyroid gland. In addition, there are accumulating data on the antiviral capacity of Se. Both epidemiological and in vitro data have demonstrated that Se deficiency might be important in viral infections as well [11]. Since viruses have been reported to induce IFN-γ and, consequently, HLA-DR expression, it has been hypothesized that the trigger in autoimmune thyroiditis might be a viral infection [27-29]. At present, the suggestion of a viral origin of autoimmunity remains speculative; however, the "selenium story" might open a new window not only for a better understanding of the beneficial effect of Se in autoimmune thyroiditis but also for research into the origin of autoimmunity [9, 11-13, 15, 22]. A new perspective has been opened by investigations of Se and the role of regulatory T cells (Treg) with CD4+CD25+FoxP3+ markers [24, 25, 30, 31]. Accumulating data demonstrate that a deficiency of CD4+CD25+ Treg cells is closely correlated with the development of ATD [24, 25, 30, 31]. Recently, animal experiments showed that CD4+CD25+FoxP3+ T cells displayed a preventive effect on the development of ATD [26]. Remarkably, Se upregulated CD4+CD25+ regulatory T cells in the iodine-induced autoimmune thyroiditis model of NOD.H-2h4 mice [24]. Our observations and experiments provide evidence that Se has a complex effect on the immune system, including decreased expression of HLA-DR molecules, and in this way can prevent the induction and perpetuation of autoimmune thyroid processes.

Conclusions
Se has a dose-dependent inhibitory effect on the expression of HLA-DR molecules of thyrocytes induced by interferon-γ. This effect of selenium was in inverse correlation with antioxidative capacity. The inhibitory effect of Se on HLA-DR molecule expression and its antioxidative capacity are involved in the therapy of autoimmune thyroiditis. Our in vitro study provides evidence that the free radical scavenging effect of Se plays an important role in the therapy and the prevention of autoimmunity.
2,045.4
2012-02-02T00:00:00.000
[ "Biology", "Medicine" ]
Integrity of Financial Statement Factors: Intellectual Capital, Independent Commissioner, and Company Size

Introduction
There is increasing demand on real estate company management to uphold the integrity of financial reporting in this increasingly complicated and dynamic business environment. Business continuity cannot be compromised by a lack of integrity in financial reporting, as it directly impacts corporate strategy, access to financial resources, investor and stakeholder confidence, strategic decision-making, and company reputation. Assessing the accuracy of financial statements is crucial for lenders, investors, and other stakeholders who wish to make loans, purchase goods or services, or engage in other financial dealings with businesses (Suzan & Bilqolbi, 2023). Financial statements that are considered to have integrity must meet criteria including understandability, relevance, accuracy, honesty, objectivity, neutrality, consistency, completeness, and balance (Monteiro et al., 2022). Aside from that, poor-quality financial reporting can occur because the financial reporting system is insufficiently integrated, inefficient, or has not achieved optimal access (Meiryani et al., 2020).
Agency theory has significant implications for the integrity of financial statements. Understanding how the business deals with its shareholders, including financial sources such as institutional and individual investors, is made easier with the use of agency theory (Hoesada & Pradika, 2019). A well-informed management team will make every effort to provide accurate information via high-quality financial statements. This is to guarantee that investors or owners may have faith in the company's ability to continue operating (Sormin, 2021). A business can evaluate the accuracy of a firm's financial statements by using the Beaver and Ryan model, in particular the Market-to-Book Value ratio (Meiryani et al., 2023; Ulfa & Challen, 2020). The integrity of financial statements is calculated by dividing the stock market price by the stock's book value (Ulfa & Challen, 2020); the Market-to-Book Value ratio indicates a reliable financial statement when the result is greater than one. The discrepancy between the book and reported values of a corporation is the main cause of this phenomenon: goodwill and inflation do not affect the reported value of assets, since assets bought in a particular year are recorded at their original purchase price.
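As an illustration of this measure, the following is a small sketch; the per-share figures are invented for the example and do not come from the study's sample.

```python
def market_to_book(price_per_share: float, book_value_per_share: float) -> float:
    """Market-to-Book Value ratio of the Beaver and Ryan model,
    used here as the proxy for financial statement integrity."""
    return price_per_share / book_value_per_share

# Illustrative figures only (rupiah per share).
mbv = market_to_book(price_per_share=525.0, book_value_per_share=480.0)
# A ratio above one is read as financial reporting with integrity.
print(f"MBV = {mbv:.3f} -> {'integrity' if mbv > 1.0 else 'lacking integrity'}")
```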
According to (Rizky et al., 2018), there is a trend in Indonesia toward increased focus on the property and real estate subsector, which is being pushed by the market's anticipated potential for expansion. This growing interest is driven by the ongoing trend of asset prices rising, which encourages a wide range of stakeholders to participate actively in the real estate and property subsector. Moreover, it is important to highlight that the property and real estate subsector is crucial to supporting the country's economic growth, both directly by contributing to GDP and indirectly by encouraging the development of important infrastructure projects (Rizqi & Anwar, 2021). Between 2018 and 2022, the GDP growth rate within the property and real estate subsector is projected to sustain an elevated trajectory. This sustained annual growth, coupled with an expanding contribution to GDP, underscores the attractiveness of investing in this particular subsector. However, heightened attention has been drawn to Indonesia's property and real estate subsector due to a spate of fraud cases. The substantial influx of investors into various projects, including apartment buildings, bridges, and office spaces, has consequently engendered potential vulnerabilities for fraudulent activities in financial reporting (Primawati & Suprantiningrum, 2018). In fact, some companies have published financial information that does not reflect a high level of honesty and objectivity; in several cases in Indonesia, the financial reports presented do not meet the expected integrity standards. This situation creates uncertainty and doubt for parties who rely on company financial information. The lack of openness and transparency in financial reporting only makes the companies' situation worse.
The overall market-to-book value ratio analysis shows that companies listed on the IDX within the property and real estate subsector demonstrated a noticeable deterioration in financial reporting integrity between 2018 and 2022. During this time, there was a discernible increase in the number of businesses that submitted false financial statements. Stakeholders depend strongly on the accuracy of the financial data these organizations provide, as correct financial records have a substantial impact on their confidence. As such, companies that continuously violate the integrity of their financial reporting run the immediate risk of undermining stakeholder confidence, which might have a negative impact on their operations and long-term sustainability. Of the property and real estate companies listed on the Indonesia Stock Exchange (BEI) from 2018 to 2022 whose market-to-book value indicates financial reporting with integrity, there were 20 companies (47 per cent) in 2018, 14 (33 per cent) in 2019, 12 (28 per cent) in 2020, 10 (23 per cent) in 2021, and 8 (19 per cent) in 2022. Overall, integrity in financial reporting in this subsector appears to have decreased from 2018 to 2022, while the number of companies whose financial reports lack integrity increased over the same period. In certain situations, having sizable intellectual capital is essential: it has the potential to significantly improve the business's financial performance and provide it with a substantial competitive edge (Teoh et al., 2024). Intellectual capital information contains data about current flows of intellectual capital and corporate initiatives to preserve intellectual capital in order to create value (Astuti et al., 2021). In light of contemporary worldwide economic patterns, intellectual capital has emerged as the paramount resource for guaranteeing uninterrupted commercial operations within the organization (Hapsari et al., 2021). To guarantee the authenticity of financial statements, it is essential to take into account the number and experience of independent commissioners, since their capacity to reveal financial information can otherwise be compromised. An independent commissioner is a person who satisfies the requirements for the position and joins the board without having any affiliation with the publicly traded firm. The independent board of commissioners is in charge of assessing the organization's general performance (Fahlevi et al., 2023). A corporation's size is measured by its total assets, revenue, and market capitalization (Abbas et al., 2021). The arrangement of financial statements varies with the size of the company, and company size can have a significant effect on financial data manipulation. The goal of this research is to ascertain whether an organization's intellectual capital, the number of independent commissioners, and its size are correlated with the integrity of its financial statements.
Literature Review
Agency Theory. Agency theory says that agency issues will occur when there is a division between management, acting as the business's agent, and the owner, acting as its principal, because the two have distinct objectives or interests (Himawan, 2019). In agency theory as applied to financial reporting integrity, all stakeholders tend to act in their own best interests. In this context, management, which understands the company's situation as a whole, will strive to provide accurate information through quality financial reports. This is to ensure that owners or investors can be confident in the continuity of company operations (Sormin, 2021). The quality and integrity of financial reporting is often influenced by the interaction between the business owner (principal) and management (agent). When information is asymmetric, or is not properly understood by the principal as the user of the information and the agent who provides it, agency conflict can occur. Management holds more information about the state of the company than business owners do, so supervision of management becomes more difficult for the principal. Managers can exploit this condition for their own purposes, thereby presenting financial reports that lack integrity. In the context of company size, large companies tend to provide more information than small companies, which may increase the possibility of information-asymmetry conflicts between agents and principals; this indicates the relationship between agency theory and company size. By utilizing intellectual capital, the role of independent commissioners, and growth in company size, it is hoped that the integrity of the company's financial reports can be well maintained.
Financial Statement Integrity. Financial statement integrity is a structured presentation of relevant financial statements, in which values, methods, and principles are presented honestly and consistently in financial reporting to provide benefits to stakeholders (Nurbaiti & Putra, 2022). It is very important to present accounting information honestly, reliably, and with high integrity; this allows users of accounting information to trust the data when making decisions (Atiningsih & Suparwati, 2018). Financial information is largely used as a powerful instrument by decision-makers to facilitate economic decision-making (Mertzanis et al., 2020). Integrated reporting aims to provide a thorough depiction of the company's process of generating value (Stacchezzini et al., 2019).
H1: Intellectual capital, independent commissioners, and company size simultaneously affect the integrity of the financial statements of property and real estate companies listed on the IDX from 2018 to 2022.
Intellectual Capital. Intellectual capital (IC) refers to the knowledge and information used in work activities to produce value (Febrilyantri, 2020). Intellectual capital is an intangible asset that can increase a company's profitability and competitiveness. According to (Suharman et al., 2023), human, structural, and relational capital should be included when evaluating intellectual capital. According to (Meramveliotakis and Manioudis, 2021), the value of intellectual capital lies in its ability to increase employee productivity as well as organizational output. The effect may also extend to the way organizations communicate, facilitating the provision of comprehensive corporate information via financial statements that attract investors. Acknowledged as an indicator of knowledge, intellectual capital has transformed the conventional approach to creating value and is now the primary driver of business growth (Dai et al., 2021). According to earlier studies, financial statements are more credible when they reflect intellectual capital (Hia & Kusumawardhani, 2023). Intellectual capital refers to the investment made in return for work, and it positively affects the accuracy of financial statements. According to (Meiryani et al., 2023), the price-to-book ratio is a statistic used to assess the value of a firm. The use of intellectual resources, the participation of independent commissioners, and the presentation of the firm's development may all contribute to the effective preservation of the financial statements.
H2: Intellectual capital has a positive and significant effect on the integrity of the financial statements of property and real estate companies listed on the IDX from 2018 to 2022.
Independent Commissioners. Independent commissioners are members of the board of commissioners who are not affiliated with the internal firm. Their responsibility is to evaluate and disseminate information on overall management performance (Pratika & Primasari, 2020). It is important for commissioners to have a high level of integrity and to remain independent; this ensures that independent commissioners are not easily influenced by company management and achieves efficiency and effectiveness in their supervisory duties (Fatin & Suzan, 2022). An organization's structure and fundamental requirements are closely related (Meiryani et al., 2020). A company's financial statements are likely to be exceptionally honest if it has competent independent commissioners, and a corporation's financial statement integrity may be substantially improved by an independent commissioner (Abbas et al., 2021). Theoretically, a company with a high number of independent commissioners is subject to more scrutiny, which reduces the possibility of fraud or data manipulation in day-to-day operations and improves the accuracy and consistency of financial statements (Abbas et al., 2021). According to studies, financial reporting is much more trustworthy when independent commissioners are present (Lesmono & Setiyawati, 2023; Marlinda et al., 2022). By efficiently overseeing the financial statement production process, independent commissioners improve the financial statements' accuracy (Annisa & Muslih, 2023).
H3: Independent commissioners have a positive and significant effect on the integrity of the financial statements of property and real estate companies listed on the IDX from 2018 to 2022.
Company Size. Company size is an indicator used to classify companies based on the scope of their operations, allowing companies to be separated into two main categories: large companies and small companies. Companies can be assessed through company size, which is obtained from the total assets listed in their financial statements; a company's size is determined by its total assets, revenue, or market capitalization (Priharta & Rahayu, 2019). Company size must be emphasized in order to disclose trustworthy financial statements with integrity. Compared to smaller companies, larger organizations communicate more information in their financial reports (Permatasari et al., 2019). As companies expand, they become more diligent in managing their financial statements due to the presence of multiple stakeholders. According to studies, the larger a firm is, the more reliable its financial statements are (Nurbaiti & Elisabet, 2023). The level of honesty in the company's financial accounts will increase in tandem with its growth (Hoesada & Pradika, 2019). According to (Hia & Kusumawardhani, 2023), a company's size significantly improves the accuracy of its financial statements; integrity improves in direct correlation with the company's expansion, as evidenced by its total assets.
H4: Company size has a positive and significant effect on the integrity of the financial statements of property and real estate companies listed on the IDX from 2018 to 2022.

Research Design and Method
Population and Sample. This research examines realty and property firms listed on the Indonesia Stock Exchange from 2018 to 2022. Using a purposive sampling strategy, the researchers obtained 215 firm-year observations: 43 companies that met the criteria were included in the sample over the five years from 2018 to 2022. These businesses were selected from the real estate subsector companies listed on the Indonesia Stock Exchange.
Variables and Measures. Dependent Variable. Financial reporting integrity is the dependent variable under investigation. This variable, denoted by Y, reflects the fundamental rules that must be adhered to when compiling financial reports. The financial report integrity variable is measured with the market-to-book value ratio (Meiryani et al., 2023):
MBVit = market price per share / book value per share ........ (1)
Intellectual Capital. Intellectual capital is measured with the value added intellectual capital (VAIC) approach, which accounts for the value added by capital employed, human capital, and structural capital (Ulum, 2017):
VAIC = VACA + VAHU + STVA ........ (2)
where VACA, VAHU, and STVA denote capital employed efficiency, human capital efficiency, and structural capital efficiency, respectively.
Independent Commissioners. The independent commissioner variable is measured as the proportion of independent commissioners on the board of commissioners ........ (3)
Company Size. size = LN(Total Assets) ........ (4)
Information: size: company size; LN: the natural logarithm of the company's total assets.
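A sketch of how these variables could be constructed from financial statement items is shown below; the VAIC decomposition follows the standard Pulic formulation that Ulum (2017) builds on, and the function names and inputs are our own illustration rather than the study's code.

```python
import math

def vaic(value_added: float, capital_employed: float, human_capital: float) -> float:
    """VAIC = VACA + VAHU + STVA (equation (2)); value added is
    typically revenues minus non-labour operating costs."""
    vaca = value_added / capital_employed               # capital employed efficiency
    vahu = value_added / human_capital                  # human capital efficiency
    stva = (value_added - human_capital) / value_added  # structural capital efficiency
    return vaca + vahu + stva

def independent_commissioner_ratio(n_independent: int, n_commissioners: int) -> float:
    """Equation (3): proportion of independent commissioners on the board."""
    return n_independent / n_commissioners

def company_size(total_assets: float) -> float:
    """Equation (4): natural logarithm of total assets."""
    return math.log(total_assets)
```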
Data Analysis. Panel data regression analysis and descriptive statistical analysis were applied in this work. Descriptive statistics describe and summarize data, or the population as a whole, through the stages of information collection, processing, and presentation (Sugiyono, 2019); in this study they are used to analyze and describe the research variables through the mean, standard deviation, maximum, and minimum values on a ratio scale. The panel data regression model is:
MBVit = α + β1 ICit + β2 KIit + β3 CSit + εit ........ (5)
where MBVit is financial statement integrity, ICit intellectual capital, KIit independent commissioners, CSit company size, and εit the error term for firm i in year t.
Classical Assumption Test. The classical assumption test is a crucial step in establishing the viability of the regression model used in the investigation. In panel data regression analysis, only the multicollinearity and heteroscedasticity tests are meaningful classical assumption tests, in line with the BLUE principle (best linear unbiased estimator) (Ghozali, 2021).
Hypothesis Test. Hypotheses are tested with the F test, the t test, and the coefficient of determination test (R2).
The Coefficient of Determination (R2). The coefficient of determination (R2) illustrates how much of the variability in the dependent variable can be accounted for by variations in the independent variables (Sugiyono, 2019). A value approaching one indicates a strong correlation between the independent and dependent variables; R2 ranges between 0 and 1.
Simultaneous Test (F Test). Simultaneous hypothesis testing assesses the collective impact of multiple independent variables on a dependent variable. The F test uses the following criteria: (1) a probability value greater than 0.050 suggests that the independent variables have no significant simultaneous influence on the dependent variable; (2) a probability value less than 0.050 indicates a simultaneous and statistically significant relationship between the independent and dependent variables.
Partial Test (t Test). The t test is used in partial hypothesis testing, which evaluates the impact of a particular independent variable on the dependent variable without taking the other independent variables into account (Sugiyono, 2019). The following criteria are used: (1) if the probability value is greater than 0.050, the independent variable has no significant partial effect on the dependent variable; (2) if the probability value is less than 0.050, the independent variable has a significant partial effect on the dependent variable.

Statistical Result
Outliers. From 2018 to 2022, 43 realty and property companies listed on the Indonesia Stock Exchange (BEI) provided 215 observations for the study. After outlier removal, the final sample comprised 38 companies with 148 observations.
Descriptive Statistical Data Analysis. Descriptive statistics characterize and summarize the data through collection, processing, and presentation (Sugiyono, 2019) and were used to outline each variable in this study. On a ratio scale, the primary metrics are the mean, standard deviation, maximum, minimum, and number of observations. The results are presented in Table 2. The mean financial statement integrity (MBVit) score for companies in the real estate and property subsector is 0.465, with a standard deviation of 0.245. Since the mean of the financial report integrity variable exceeds its standard deviation, the data do not show a wide dispersion. Across the sample, 64 observations show good integrity and 84 poor integrity. The highest recorded value of the financial report integrity variable, 1.275 in 2020, belongs to PP Properti Tbk, while the lowest, 0.106 in 2020, is attributed to PT Star Pacific Tbk.
Intellectual capital (IC), an independent variable, has a mean of 10.408 and a standard deviation of 8.432. Of the 148 observations, 91 exhibit low and 57 high levels of intellectual capital. The maximum value of the intellectual capital variable, 49.571, was recorded by PT PP Properti Tbk in 2020, indicating a high level of intellectual capital, while the minimum, -1.633, was recorded by PT Greenwood Sejahtera Tbk in 2022, indicating a low level. The independent commissioner variable (KI) has a mean of 0.428 and a standard deviation of 0.113; 87 observations show a low and 61 a high proportion of independent commissioners. The maximum value of 0.800 was recorded by PT PP Properti Tbk in 2019, indicating a high proportion of independent commissioners, while the minimum of 0.166 was recorded by PT Intiland Development Tbk in 2018 and 2019, indicating a low proportion.
Company size (CS) has a mean of 29.304 and a standard deviation of 1.582. Of the observations, 92 relate to large corporations and 56 to small businesses. Since the mean of every variable (financial statement integrity, intellectual capital, independent commissioners, and company size) exceeds its standard deviation, the data are consistent over time. Company size values range from 23.192 in 2020, a low value recorded by PT Bumi Serpong Damai Tbk, to 31.805 in 2022, a high value recorded by PT Plaza Indonesia Realty Tbk.
Classical Assumption Test. The classical assumptions in panel data regression analysis are assessed with the multicollinearity and heteroscedasticity tests. The multicollinearity test findings reveal no discernible association among the independent variables and no signs of multicollinearity.
Multicollinearity Test. Multicollinearity testing seeks to ascertain whether the independent variables in the regression model are correlated with one another (Ghozali, 2018). The correlation values among intellectual capital (IC), independent commissioner (KI), and firm size (CS) are all lower than 0.800.
Heteroscedasticity Test. As part of assessing the classical assumptions, the heteroscedasticity test looks for evidence of non-uniform variance in the regression model's residuals across the data (Ghozali, 2018). A chi-square probability value higher than 0.050 indicates the absence of heteroscedasticity; a value below 0.050 indicates that heteroscedasticity is present. The test found no evidence of heteroscedasticity in the research data: the chi-square probability of 0.084 is greater than the 0.050 significance level.
Selection of Panel Data Regression Models. Models for panel data regression were selected using the Chow and Hausman tests. Table 4 displays the outcomes of the Chow test, which is used to decide between a fixed effect model and a common effect model. The cross-section F probability value in Table 4, 0.000, is less than 0.050, indicating that the fixed effect model is preferred. The Hausman test is then used to determine which model is best. This study used the Hausman test, which compares a random effect model with a fixed effect model, to estimate the panel data regression. Conducted at the 0.050 significance level, the Hausman test yielded a probability of 0.003, which is less than 0.050. The results show that the fixed effects model best serves this inquiry.
Panel Data Regression Equation. Based on data from the Indonesia Stock Exchange (IDX), Table 6 shows the effects of company size, intellectual capital, and independent commissioners on the reliability of financial statements of real estate and property companies from 2018 to 2022. The panel data regression equation under the fixed effects model is:
MBVit = 0.555 + 0.005 ICit - 0.366 KIit + 0.000 CSit + εit ........ (6)
The equation is interpreted as follows. The constant of 0.555 indicates that if the independent variables (intellectual capital, independent commissioners, and company size) are zero or constant, the value of the dependent variable, financial report integrity, will be 0.555. The regression coefficient for intellectual capital is approximately 0.005, meaning that a one-unit rise in the intellectual capital variable, holding other variables constant, leads to a 0.005 increase in the integrity of financial statements. With all other factors held constant, a one-unit rise in the independent commissioner variable results in a 0.366-point drop in the integrity of financial statements, according to its regression coefficient of -0.366. With a regression coefficient of approximately 0.000 for company size, a one-unit increase in company size implies essentially no change in financial report integrity, all other factors held constant.
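For reference, the fixed effects model selected by the Chow and Hausman tests can be estimated in its least-squares-dummy-variable form roughly as follows; the column and file names are hypothetical, and the paper does not state which software was actually used.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per firm-year with columns
# mbv (integrity), ic, ki, cs, and a firm identifier.
df = pd.read_csv("property_realestate_2018_2022.csv")

# LSDV form of the fixed effects model: firm dummies absorb the
# cross-section effects, mirroring equation (6).
fe = smf.ols("mbv ~ ic + ki + cs + C(firm)", data=df).fit()

print(fe.params[["ic", "ki", "cs"]])   # cf. 0.005, -0.366, 0.000
print(fe.pvalues[["ic", "ki", "cs"]])  # cf. the partial t-test p-values
print(fe.rsquared)                     # cf. the reported R^2 of 0.78
```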
According to the panel data regression equation, IC is directly proportional to the integrity of financial statements, the independent commissioner variable carries a negative coefficient, and company size has essentially no effect.
Coefficient of Determination Test (R2). This research examines the relationship between company size, the number of independent board members, and intellectual capital on the one hand, and the reliability of financial statements on the other; this was confirmed with the coefficient of determination analysis. According to the coefficient of determination test, the independent variables explain 78 per cent of the variation in the dependent variable; the remaining 22 per cent stems from other factors not examined in this study.
Simultaneous Test (F Test). Table 8 illustrates the use of the F test to determine the significance of the independent variables with respect to the dependent variable. The simultaneous test yields a probability value of 0.000, which is less than 0.050. Consequently, all of the independent factors in this research simultaneously impact the integrity of financial statements.
Partial Test (t). The partial test shows how each independent variable affects the dependent variable while excluding the others. The test findings lead to the following conclusions. The intellectual capital variable (IC) attains a probability of 0.043, which is less than 0.050; intellectual capital therefore significantly improves the integrity of financial records. The independent commissioner variable (KI) attains a probability of 0.019, which is less than 0.050; financial statement integrity is significantly impacted by independent commissioners. The firm size variable (CS) attains a probability of 0.980, which is greater than 0.050; consequently, there appears to be no association between a corporation's size and the reliability of its financial statements.

Discussion
The Influence of Intellectual Capital, Independent Commissioners, and Company Size on the Integrity of Financial Statements. From 2018 through 2022, IDX data on intellectual capital, the number of independent commissioners, and company size were assessed to determine whether real estate and property companies' financial statements are credible. The F test findings show that, taken together, independent commissioners, firm size, and intellectual capital significantly affect the integrity of financial statements; the impact is statistically significant, since the p-value of 0.000 is below the 0.050 significance level. Hypothesis 1 is therefore accepted.
The Influence of Intellectual Capital on the Integrity of Financial Statements.
The second hypothesis was tested on data from 2018 to 2022, yielding results consistent with expectations. Real estate and property companies listed on the IDX exhibit diverse levels of intellectual capital affecting their financial records. The significant and positive impact of intellectual capital on financial statement integrity is evidenced by the coefficient of 0.005 and a probability of 0.043, which is below the 0.050 threshold. The results of this investigation correspond to those previously derived by (Hia & Kusumawardhani, 2023). This exemplifies how the use of expert knowledge, competency, and talent management enhances the efficiency of human resources. Precise financial documentation serves as an indication of a corporation's robust intellectual assets, and thriving firms exhibit exceptional levels of performance. A higher Value Added Intellectual Coefficient (VAIC) indicates a shift towards more economical expenditure habits. When calculating VAIC, three criteria are taken into account: capital employed efficiency, human capital efficiency, and structural capital efficiency (Castro et al., 2021). Companies with good intellectual capital tend to achieve superior business performance, which is reflected in the integrity of their financial statements.
The Influence of Independent Commissioners on the Integrity of Financial Statements. The results for independent commissioners refute the third hypothesis: for real estate and property businesses listed on the IDX from 2018 to 2022, the independent commissioner variable significantly weakens the integrity of financial reporting. This influence is statistically significant, with a probability of 0.019, less than the commonly accepted threshold of 0.050, and a coefficient of -0.366. Instead of supporting the researchers' premise, the study's results validate the findings of (Meiryani et al., 2023). This implies that a majority of independent commissioners might result in inadequate supervision, heighten the probability of conflicts of interest, overlook internal hazards, and have a detrimental effect on the accuracy of financial statements. The board of commissioners' ability to supervise corporate governance could be weakened by problems in communication, coordination, and decision-making. The presence of a very dominant independent commissioner can lead to excessive dependence on external entities. Independent commissioners can be influenced by management to manipulate financial reports, making them no longer independent; the potential for conflicts of interest increases; and a lack of in-depth understanding of internal risks can harm the integrity of financial statements.
The Effect of Company Size on the Integrity of Financial Statements. The fourth hypothesis is refuted. An analysis of data from the Indonesia Stock Exchange (IDX) between 2018 and 2022 shows that the size of firms in the real estate and property subsector has no significant relationship with the quality of their financial reports. A coefficient of 0.000 and a probability of 0.980, greater than 0.050, indicate that variations in a company's size do not significantly impact the precision of its financial records. The study's results support the findings of Abbas et al. (2021) rather than the researchers' initial hypothesis. This illustrates that the validity of a company's financial reporting is unaffected by its size, irrespective of its magnitude. While not all big firms have extensive experience in preparing financial statements, small enterprises often have considerable proficiency in ensuring the accuracy of their financial records (Kashani & Mousavi Shiri, 2022). Therefore, financial statement integrity is not precluded by a small company's size, as indicated by modest market capitalization, book value, and earnings; conversely, a bigger corporation's financial reports may not reflect its real financial situation, and greater size does not always mean a higher degree of honesty in generating financial reports. Thus, it is safe to say that the reliability of financial statements is unaffected by the size of a firm.

Conclusions
The study examined factors affecting the precision of financial statements among real estate and property firms listed on the Indonesia Stock Exchange (IDX) from 2018 to 2022: company size, intellectual capital, and the presence of independent commissioners. Incorporating the intellectual capital variable into financial audits of IDX-listed real estate and property companies can augment the comprehensiveness of the evaluation. However, the independent commissioner variable may pose potential hazards to the accuracy of financial records for these firms, while firm size does not seem to influence the credibility of financial accounts. Hence, it is advisable for enterprises in the property and real estate sector to consider these findings when addressing potential challenges to the reliability of their financial statements. Recognizing the significance of intellectual capital is crucial, and independent commissioners wield considerable influence over financial accuracy. Investors and stakeholders relying on financial data should incorporate this research into their decision-making, carefully evaluating variables such as intellectual capital and the presence of independent commissioners when making investment decisions. Moreover, educational resources and research on the effects of independent commissioners, intellectual capital, and firm size on financial statement accuracy should incorporate the study's insights. Expanding the scope of research to other sectors and examining additional factors that may affect financial statement precision is recommended; employing alternative measurement methods can further enrich the understanding of these issues.
Figure 1.GDP Growth in the Property and Real Estate Subsector Listed on the IDX 2018-2022 Source: Central Statistics Agency (2023), Data processed by the author (2023) Table 2 . Results of Descriptive Statistical Analysis Table 3 . Multicollinearity Test Results Table 4 . Heteroscedasticity Test Results Table 5 . Chow Test Results Table 6 . Hausman Test Result Table 7 . Fixed Effect Model Test Results Table 8 . Coefficient of Determination Test Results
7,275.8
2024-03-31T00:00:00.000
[ "Business", "Economics" ]
Evaluation of Return Period and Occurrence Probability of the Maximum Magnitude Earthquakes in Iraq and Surroundings
It has long been clear that earthquake prediction is important from both social and economic perspectives; therefore, the practical objective of today's earthquake seismology researchers is an effective earthquake prediction program. The purpose of this study is to estimate earthquake probabilities and return periods using an updated earthquake catalogue (1900-2019) for Iraq and its surroundings. Weibull's formula and the inverse Weibull formula were employed to calculate the return period and the occurrence probability of the maximum magnitude earthquake. The return periods for earthquakes of magnitude 5 and 7 Mw were 1.1 and 10.54 years, respectively, while the occurrence probabilities were 93.79% and 9.5%, respectively. The greatest magnitude is 7.7, with a 121-year return period and a likelihood of approximately 0.82%. The probability of exceedance increases as the time period increases, and the return period is greater for earthquakes of higher magnitudes.

Introduction
Earthquakes that affect humans and their environments are among the worst types of natural catastrophes. It has long been clear that earthquake prediction is important from both social and economic perspectives; therefore, the practical objective of today's earthquake seismology researchers is an effective earthquake prediction program [1]. The difficulty of earthquake prediction in seismology has long attracted the interest of both the scientific community and the general public [2]. Predicting earthquakes has been highly challenging for a long time [3]. The difficulty is due to several reasons: (1) it is extremely difficult to predict the time and size of seismic events because of the complex interactions that occur between tectonic plates, faults, and other geological factors [4]; (2) the lack of substantial and long-term data is a major obstacle to earthquake prediction; and (3) large earthquakes frequently occur at long intervals (hundreds to thousands of years) [5], making it challenging to detect trends and patterns over a long period of time [6]. In addition, traditional prediction techniques based on empirical (physical or statistical) models are frequently oversimplified and flawed when applied to real-world events [7]. Earthquake prediction is one of the many scientific subjects that has benefited from the recent rapid progress in artificial intelligence (AI) [2].
There are three types of earthquake forecasting models in seismology [8]. The first is the statistical probability forecasting model based on the Gutenberg-Richter (GR) relationship [9]. Physical prediction models, which are divided into two categories, constitute the second type: the first category is predicated on the intricately observed space-time patterns of earthquake behaviour, and the second on the seismic quiescence that occurs before major earthquakes [10]. The third type is a hybrid model that combines statistical probability forecasting models with physical earthquake prediction models [11], [12]. Many investigations have been carried out to develop reliable estimates of the likelihood, magnitude, and return period relationships in light of the widespread occurrence of earthquakes. When applying probabilistic seismic hazard assessment (PSHA) models to forecast earthquakes, there are specific requirements for the geographic data of the area and the earthquakes [8]. To reliably predict the occurrence of future earthquakes, a PSHA model relies on in-depth knowledge of the mechanism of earthquakes [13]. The main benefit of PSHA is its ability to integrate all aspects of seismicity (time, space, and ground motion) to produce a cumulative exceedance probability that considers the relative frequency of different earthquakes and ground motion features [14].
Although it is difficult to provide an exact date for a predicted earthquake, it is feasible to estimate its likelihood with a certain degree of inaccuracy. Probability distribution functions are important for estimating earthquake hazards [15]. Statistical modeling of earthquakes is a method for applying a straightforward point-process approach to various characteristics of recorded seismicity [16], [17]. It is possible to forecast the long-term process of earthquake generation at a specific location by using best-fit statistical models [18]. To compute conditional probabilistic time-dependent seismic renewal models for future earthquakes, statistical distributions such as the Gumbel, Gaussian, Lognormal, Gamma, and Weibull distributions have been used [19], [20]. To predict future earthquakes and perform a probabilistic study of seismic hazards, it is crucial to choose the distribution model that best fits the data for a specific location [21]. In Iraq, Al-Abbasi and Fahmi (1985) used the earthquake catalogue for the period 1905-1982 to determine the earthquake maximum magnitude, return period, and occurrence probability using the Gumbel statistical distribution model [35]. Ammer et al. (2004) used the earthquake catalogue for the period 1900-2000 to assess the maximum magnitude and recurrence periods of moderate and large earthquakes using the Gumbel statistical distribution model and the Gutenberg-Richter relation [36]. The purpose of this study is to estimate earthquake probabilities and return periods using an updated earthquake catalogue (1900-2019) for Iraq and its surroundings.
Seismicity of Iraq Iraq is located in the northeastern region of the Arabian Plate, close to the convergence of the Eurasian and Arabian plates. While the eastern and northern regions of Iraq, which lie near the convergence borders of the Arabian and Eurasian plates, are subject to significant seismic activity, other regions of the country, which are farther from the plate boundary, are subject only to weak seismic activity [37]. The seismicity of Iraq has been studied by many researchers (for example, [38], [39], [35], [40], [41], [42], [43], [44], [45], [46], [47], [48], [49], [50], [51]). The general trend of the Bitlis-Zagros fold-thrust belt is closely related to the seismicity of Iraq [44]. Iraq is divided tectonically into three regions: the Bitlis-Zagros Fold and Thrust Belt, the Mesopotamia Foredeep, and the Inner (stable) Arabian Platform (Figure 1) [52]. The Mesopotamia Foredeep is divided into the Al-Jazira Plain and the Mesopotamia Plain. The Bitlis-Zagros Fold and Thrust Belt and the Mesopotamia Foredeep are classified as the Outer (unstable) Arabian Platform. Frequent earthquake activity is a feature of the Bitlis-Zagros Fold and Thrust Belt of the Alpine Orogeny [53], [54], [37]. The Lower Zab and Diyala River faults are two examples of active NE-SW trending (transverse) faults in the Bitlis-Zagros fold and thrust belt. Listric (longitudinal) faults running parallel to the fold axes are also active in this region [37]. The Badra-Amarah fault, which runs along the Iraqi-Iranian border and is thought to be the most seismically active fault in Iraq, the Euphrates fault, the Hummar fault (north of Basra), the Al-Refaee fault, and the Kut fault are all seismically active faults in the Mesopotamian Foredeep [37]. A recent study of the seismicity of the Western Desert and its surrounding areas, which represent an important part of the Inner (stable) Arabian Platform, showed that the region was exposed to earthquakes ranging in magnitude from 2 to 3.5 during the period from 1900 to 2017 [55]. The epicenters were grouped into five seismic zones. A causal association may exist between the seismic activity in the research region and zones of weakness and/or stress concentration at the fault junctions. While there are faults in the Inner Arabian Platform, they have undergone far less recent deformation and show less evidence of Quaternary activity [37]. Local deformation contributes to the seismicity of a stable shelf [56]. Based on the information gathered from the International Seismological Center (ISC) by [49], Figure 1 shows the seismicity map of Iraq for 1900-2019. Methodology In the current study, extreme (maximum or minimum) value analysis was used to calculate the probability of occurrence and return period of maximum magnitude earthquakes. The statistical study of unusual events is the focus of extreme value theory, which is concerned with the statistical laws of the extreme values of a random variable. This method requires only basic computations and can efficiently characterize the tail properties of the data. Extreme value theory is an essential tool in the study of natural catastrophes [8]. Most extreme-event analyses focus on the yearly distribution of the lowest or largest values at a particular location [14]. The return periods and earthquake occurrence probabilities were calculated using Weibull's formula [57]. It is a simple method and is still employed by the U.S.
Geological Survey and other researchers. The average recurrence interval over a long period is represented by the return period of an earthquake, which is a statistical measurement [25]. The procedures listed below were used to determine the return period: (1) acquiring information over time on the frequency of earthquakes of a certain magnitude in a particular region, (2) sorting the earthquakes by magnitude in decreasing order, and (3) calculating the return period of a specific magnitude using Weibull's formula:

T = (n + 1) / m (1)

where T = return period (years), m = event rank (in reverse order), and n = number of earthquakes in the earthquake catalogue. Using Weibull's formula, the yearly probability of exceeding each magnitude is computed as follows:

P = m / (n + 1) (2)

According to Equations 1 and 2, the occurrence probability (P) of an earthquake with a given magnitude is as follows:

P = 1 / T (3)

For example, an earthquake with a 20-year recurrence period would have an annual exceedance probability of 1/20, 0.05, or 5%. According to this, there is a 0.05, or 5%, probability that an event exceeding the magnitude of the 20-year event will occur in any particular year. Similarly, the likelihood of an event larger than the 50-year event occurring in any given year is 1/50 = 0.02, or 2%. Although these percentages are the same every year, an earthquake of this magnitude might occur in the following year or might not occur for far longer than 50 years [14], [32]. The following formula can be used to determine the likelihood that an earthquake of a certain magnitude will occur within any time period t [28]:

Pt = 1 - (1 - P)^t (4)

where Pt is the occurrence likelihood throughout the entire time period t, and P is the occurrence likelihood in any given year. Earthquakes Data The International Seismological Center (ISC) earthquake occurrence data were the datasets that were used. The data that were chosen pertain to earthquakes that occurred between latitudes 28° and 38° N and longitudes 38° and 49° E. The selected data extend from January 1, 1900, to December 31, 2019. The magnitudes of the earthquakes range from 0.3 to 7.7 Mw. The data for the earthquakes in Iraq and its surroundings, including the year of occurrence, earthquake number, and maximum and minimum magnitudes, are listed in the Appendix [49]. Results and Discussion The results of the calculations of the maximum magnitude (Mmax), rank (m), annual exceedance probability (P%), and return period T for the earthquake data are listed in Table 1. The earthquakes were classified into several ranks based on their magnitude. The highest-magnitude event takes the first rank (m1), the subsequent event the second rank (m2), and the lowest-magnitude event takes the last rank, equal to the number of years of the earthquake catalogue.
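Equations (1)-(4) are simple enough to script directly. The following minimal Python sketch applies them to a list of annual maximum magnitudes; the sample values are illustrative placeholders rather than the actual ISC catalogue data, and the 30-year window for Pt matches the time span discussed below.

# Weibull return periods and exceedance probabilities (Equations 1-4).
# The annual maximum magnitudes below are illustrative placeholders only.
annual_max = [5.2, 6.0, 4.9, 7.0, 5.5, 6.3, 4.7, 7.7, 5.1, 5.8]

n = len(annual_max)                        # record period of the catalogue, years
ranked = sorted(annual_max, reverse=True)  # rank 1 = largest event

for m, magnitude in enumerate(ranked, start=1):
    T = (n + 1) / m              # Eq. (1): return period in years
    P = m / (n + 1)              # Eq. (2)/(3): annual exceedance probability, P = 1/T
    Pt = 1 - (1 - P) ** 30       # Eq. (4): probability of occurrence within 30 years
    print(f"Mw {magnitude:.1f}: rank {m}, T = {T:.2f} yr, "
          f"P = {100 * P:.2f}%, P(30 yr) = {100 * Pt:.2f}%")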
Return Period and Annual Exceedance Probability The results of the calculation of the return period of the annual maximum magnitude of the earthquakes are listed in Table 1. The relationship between the annual maximum magnitude and the return period is shown in Figure 2. The recurrence period for every earthquake magnitude between 4.7 and 7.7 Mw can be read from Figure 2. For example, a magnitude 6 Mw earthquake has a return period of approximately 2.45 years, whereas a magnitude 7 Mw earthquake has a return period of approximately 10.54 years. The results of the current study on the return periods of many earthquakes were compared with those of a previous study conducted by [35] using extreme value analysis. They used the earthquake catalogue for the period 1900-1982 and employed the Gumbel distribution model, while the Weibull distribution model was used in the current study. The return period values obtained in the current study were compared with those reported by [35] (Table 2). The difference in the return period values calculated in the current study and in the previous study, especially for large magnitudes, may be due to the difference in the coverage periods of the earthquake catalogues used in the two studies. The record periods for the earthquake catalogues in the current and previous studies were 120 and 82 years, respectively. According to Weibull's formula (Equation 1), the record period affects the value of the return period for the same rank (m) of a given magnitude. For example, an earthquake of magnitude 7, which has rank (m) = 2, has a return period of 41.5 years if the record period (n) equals 82 years, while the return period equals 60.5 years if the record period (n) equals 120 years. Several studies [33], [8] showed that the use of different statistical distribution models (e.g., Gumbel, Weibull, Gamma, etc.) affects the calculated return period and occurrence probability of an earthquake. The relationship between the yearly maximum magnitude of the earthquake and the yearly exceedance probability is shown in Figure 3. An event's magnitude and recurrence interval can be derived from this figure. For example, the event magnitude associated with the return period of 1.1 years is about 5 Mw; this event is called the 1-year earthquake. The probability that an event this year or in any other year will have a magnitude greater than that of the 1-year earthquake is 93.79%. As another example, the earthquake magnitude associated with the return period of 10.54 years is about 7 Mw; this event is called the 10-year earthquake. The probability that an event this year or in any other year will have a magnitude greater than that of the 10-year earthquake is 9.5%. Figure 3. Earthquake magnitude and annual exceedance probability relationship. Probability during a period of time (Pt) The Pt value for the time period t (120 years) was calculated for earthquakes of magnitudes 6.5, 7, and 7.4 Mw using formula 4.
The results are shown in Figure 4. The Pt values at 30 years for earthquakes of magnitudes 6.5 Mw, 7 Mw, and 7.4 Mw were 99.97%, 95%, and 58.39%, respectively. The earthquake probabilities and their magnitudes over a period of 30 years are shown in Figure 5. An earthquake of magnitude 6 Mw, for example, has a 99.99% probability of occurrence, whereas an earthquake of magnitude 7.7 Mw has a probability of 21.89%. The calculated values show that, as the time period increases, the probability of exceedance increases. Additionally, it should be noted that the return period is longer for earthquakes of higher magnitudes. It is necessary to realize that the calculated probability of an earthquake occurring and its return period are statistical predictions derived from a collection of earthquake data for Iraq. Nobody actually knows when or where an earthquake of magnitude M will strike with a probability of 1% or higher [25]. Conclusions Weibull's formula and its inverse were used to calculate the probability of occurrence and the return period, respectively. The return periods for earthquake magnitudes of 5 and 7 Mw were 1.1 and 10.54 years, respectively, while the occurrence probabilities were 93.79% and 9.5%, respectively. The largest magnitude is 7.7, with a return period of 121 years and a likelihood of about 0.82%. The probabilities during a period of time (Pt) at 30 years for earthquakes of magnitudes 6.5 Mw, 7 Mw, and 7.4 Mw are 99.97%, 95%, and 58.39%, respectively. The probability of exceedance increases as the time period lengthens, and the return period is longer for earthquakes of higher magnitudes. Figure 4. Earthquake probability for earthquake magnitudes in a time span. Figure 5. Earthquake probability and their magnitudes in a time span of 30 years. Table 1. Ranking, maximum magnitude, probability, and return period. Figure 2. The relationship between return period and earthquake magnitude. Table 2. Comparison of the obtained return period values with the previous study results.
3,544.8
2024-02-01T00:00:00.000
[ "Geology", "Environmental Science" ]
Heterogeneous Photocatalysis as a Potent Tool for Organic Synthesis: Cross-Dehydrogenative C–C Coupling of N-Heterocycles with Ethers Employing TiO2/N-Hydroxyphthalimide System under Visible Light Despite the obvious advantages of heterogeneous photocatalysts (availability, stability, recyclability, ease of separation from products, and safety), their application in organic synthesis faces serious challenges: generally low efficiency and selectivity compared to homogeneous photocatalytic systems. The development of strategies for improving the catalytic properties of semiconductor materials is the key to their introduction into organic synthesis. In the present work, a hybrid photocatalytic system involving both a heterogeneous catalyst (TiO2) and a homogeneous organocatalyst (N-hydroxyphthalimide, NHPI) was proposed for the cross-dehydrogenative C–C coupling of electron-deficient N-heterocycles with ethers employing t-BuOOH as the terminal oxidant. It should be noted that each of the catalysts is completely ineffective when used separately under visible light in this transformation. The occurrence of visible light absorption upon the interaction of NHPI with the TiO2 surface and the generation of reactive phthalimide-N-oxyl (PINO) radicals upon irradiation with visible light are considered to be the main factors determining the high catalytic efficiency. The proposed method is suitable for the coupling of π-deficient pyridine, quinoline, pyrazine, and quinoxaline heteroarenes with various non-activated ethers. Introduction Heterogeneous photocatalysis in organic synthesis is a young and fast-growing area [1][2][3][4][5]. The semiconductor materials used in photocatalysis are inexpensive and widely available; their advantages include the ease of separation from organic products, stability and recyclability [1,5]. However, the development of this area is still hindered by several formidable obstacles, such as low catalytic efficiency due to the low degree of charge separation in photoexcited states and the fast recombination of electron-hole pairs [6,7], low visible light absorption, and low selectivity due to the strong oxidation power of photogenerated valence-band (VB) holes in popular semiconductors (TiO2, ZnO, Bi2O3, WO3, etc.) [1,8]. This situation is reflected in the comparatively small number of synthetic methods in fine organic synthesis based on heterogeneous photocatalytic systems, relative to the mainstream applications of heterogeneous photocatalysis: oxidative destruction of pollutants [9][10][11], hydrogen generation [12,13], CO2 reduction [14][15][16] and water splitting [17]. UV irradiation, which is used frequently for the excitation of heterogeneous photocatalysts, is inconvenient due to safety issues, the comparatively high cost of UV light sources, incompatibility with common laboratory glassware (UV-transparent quartz is necessary) and possible side reactions due to the high energy of the light. The modification of heterogeneous photocatalysts, such as TiO2, in order to shift their photoactivity spectrum from UV to visible light [10,[34][35][36][37] is the key task for expanding the scope of their applications in organic synthesis, increasing selectivity and making possible the use of cheap and available light sources for catalyst activation.
At present, the following modification approaches have been proposed: the immobilization of dyes (organic compounds or metal complexes) on the photocatalyst surface [34,[38][39][40][41], doping with metal ions or nonmetal elements [42,43], semiconductor coupling [7,[44][45][46][47][48][49] and modification with organic molecules bearing hydroxyl or carboxyl groups [34,[50][51][52][53][54][55][56], which demonstrate the occurrence of visible light absorption when adsorbed on the surface of a semiconductor. NHPI/TiO2 is one of the efficient catalytic systems activated by visible light that are based on industrially available substances (Scheme 1). The interaction of NHPI with the TiO2 surface leads to the occurrence of visible light absorption, resulting in the photogeneration of phthalimide-N-oxyl radicals (PINO) [20,22]. In our previous work [20], we demonstrated that the NHPI/TiO2 system could be successfully applied to the aerobic oxidation of alkylarenes under visible light irradiation (Scheme 1A). The conceptual novelty of this system arises from the conjunction of heterogeneous photocatalysis with homogeneous radical chain organocatalysis. A distinguishing feature of this system is the migration of PINO into the volume of solution, where the PINO/NHPI-catalyzed radical chain process, once initiated on the TiO2 surface, produces the target product without the need for additional light absorption [20]. Thus, the energy efficiency of photocatalysis is fundamentally improved by combining heterogeneous photocatalysis with homogeneous organocatalysis. In the presence of an additional organocatalyst, (2,2,6,6-tetramethylpiperidin-1-yl)oxyl (TEMPO), the effective oxidative homocoupling of benzylamines was achieved [22]. Scheme 1. Applications of the NHPI/TiO2 photocatalytic system in organic synthesis: CH-oxygenation (A) [20], oxidative homocoupling of benzylamines (B) [22], and Minisci-type cross-dehydrogenative C-C coupling reported in the present work (C). In the present study, we demonstrate the successful application of the NHPI/TiO2 system to a more challenging cross-dehydrogenative C-C coupling process (Scheme 1C). In this case, the previously reported CH-oxygenation processes [20] should be suppressed, which is a difficult task. In addition, the process of C-O coupling between NHPI-derived PINO radicals and CH-reagents [57][58][59] must be avoided. The oxidative coupling of ethers with π-deficient N-heteroaromatic compounds (a Minisci-type reaction) was chosen as a model reaction due to its practical importance for the functionalization of N-containing heterocycles with C-C bond formation. Minisci-type reactions [60][61][62][63][64][65][66][67][68] are based on the addition of nucleophilic C-centered radicals to electron-deficient arenes and represent one of the most important methods for the functionalization of such arenes, along with the nucleophilic aromatic substitution of hydrogen [69][70][71] and functionalization via transition-metal-catalyzed C(sp2)-H bond activation [72][73][74][75][76]. The products of the Minisci reaction are of great value for medicinal chemistry [61,64]. Thus, the development of new, milder, more efficient methods tolerant to a large number of functional groups based on Minisci chemistry remains a hot research topic.
Optimization of Photocatalytic System Composition Based on our previous work [20], TiO2 with a high specific surface area (anatase nanopowder, Hombikat UV100) and industrially available N-hydroxyphthalimide were chosen as the components of the photochemical system. Blue LEDs (455 nm) with an input power of 10 W were used as light sources. In the first step, we optimized the conditions of the photochemical cross-dehydrogenative Minisci reaction between 4-methylquinoline 1a and tetrahydrofuran 2a (Table 1). Tert-butyl hydroperoxide (TBHP) was used as an inexpensive, easily available and metal-free oxidant. The starting conditions (10 mg of TiO2, 20 mol.% of NHPI, 4 mmol of TBHP, 5 h, run 1) yielded 45% of the product 3aa. The absence of either TiO2 or NHPI resulted in zero conversion of 1a (runs 2, 3), proving that both components of the catalytic system are essential. Without t-BuOOH, the reaction proceeded with low efficiency: only trace amounts of the product were formed (run 4). As a rule, the addition of a strong Brønsted acid, such as HCl [85] or TFA [77,79,82,84,86], increases the efficiency of the Minisci reaction. Acids protonate π-deficient N-containing heterocycles, making them more susceptible to attack by nucleophilic C-centered radicals [67]. However, in our case, the addition of trifluoroacetic acid (TFA, run 5) had no significant effect on the yield and conversion. The addition of 0.5 mL of water resulted in a drop in 3aa yield (run 6). Water breaks down the stable suspension of TiO2 in THF, causing the catalyst particles to aggregate in the water droplets. Both an increase and a decrease in the amount of THF led to a decrease in the yield of 3aa (runs 7, 8). The dilution of the reaction mixture with such co-solvents as hexafluoroisopropanol (HFIP, run 9) and acetonitrile (MeCN, run 10) slowed down the reaction, and dilution with dichloroethane (DCE, run 11) led to the complete suppression of the target process. It is known that hydrogen peroxide can be used as the oxidant for the photocatalytic Minisci reaction [85]. However, the change of the oxidant from TBHP to aqueous H2O2 led to a dramatic drop in the yield (run 12). The lower efficiency of H2O2 compared to TBHP can be explained by the fact that H2O2 can not only initiate free-radical reactions but can also act as an inhibitor via the formation of HOO• radicals [92][93][94]. The use of other organic peroxides, such as meta-chloroperoxybenzoic acid (m-CPBA, run 13), cumene hydroperoxide (run 14) and dicumyl peroxide (run 15), led to low yields or did not provide the product at all. Dibenzoyl peroxide (BzOOBz, run 16) showed a yield comparable to TBHP, but the formation of a large amount of benzoic acid, which is poorly soluble in the system, complicates the isolation of the products and limits the scalability of the procedure. Therefore, TBHP was chosen as the optimal oxidant. The standard version of the Minisci reaction often uses inorganic persulfates as oxidants. In our system, the use of persulfates was less efficient than TBHP and led to a significant drop in yield with increasing reaction time, presumably due to the overoxidation of the product (runs 17-20). An inert atmosphere did not increase the selectivity of the process (run 21), so we decided to carry out the reaction under air. Table 1. Influence of photocatalytic system composition, irradiation power, and nature of oxidant on the conversion of 4-methylquinoline 1a and yield of 3aa in photocatalytic Minisci reaction.
a The conversion of 1a and the yield of 3aa were determined by 1H NMR using C2H2Cl4 as an internal standard. b Instead of TBHP. c 1 mL of water was used as a co-solvent to dissolve the persulfate. In the next step, we optimized the NHPI/TiO2/TBHP ratio and the irradiation time to achieve the maximum yield of the coupling product 3aa (Table 2). Increasing the amount of TiO2 increases the yield of 3aa (runs 1-4). However, when switching from a TiO2 loading of 20 mg to 40 mg, the efficiency increased only slightly. Therefore, the TiO2 loading of 20 mg was chosen as the optimal amount. Similarly, larger loadings of NHPI resulted in an increase in the 3aa yield (runs 5-8), but the step from 20 to 40 mol.% of NHPI increased the yield of 3aa only slightly, and a slight drop in selectivity was observed. The optimum excess of TBHP was 4 mmol per 1 mmol of 1a (runs 9-11). The reaction proceeded with almost complete conversion in 8 h (run 15). It should be noted that the visible-light-active heterogeneous photocatalyst g-C3N4 was ineffective for the model coupling reaction under the same conditions (run 16). The conditions of experiment 15 were chosen as optimal for further studies of the substrate scope of the developed method. a The conversion of 1a and the yield of 3aa were determined by 1H NMR using C2H2Cl4 as an internal standard. b Bulk g-C3N4 (20 mg) was used instead of TiO2 as the heterogeneous photocatalyst.
Application of the Designed Photocatalytic NHPI/TiO2 System to the Minisci Reaction With the optimal conditions in hand (Table 2, run 15), we synthesized a wide range of coupling products between N-heterocycles and ethers. The scope of ethers was explored first (Scheme 2). For substrates demonstrating lower conversions compared to 1a, the reaction time was increased, in some cases up to 48 h (the reaction times and conversions are given in Scheme 2). Among the tested ethers, we obtained the best result with THF: after 8 h of reaction, almost complete conversion of 4-methylquinoline 1a and a high yield of product 3aa (89%) were observed. As a rule, the reaction proceeds more slowly and with lower selectivity for other ethers. In the reaction of 4-methylquinoline with 2-methyltetrahydrofuran 2b, a mixture of products 3ab (as a diastereomeric mixture, major) and 3ab' (minor) was observed. The observed regioselectivity can be explained by the fact that although hydrogen atom abstraction is most favored from the weakest tertiary CH-bond (position 2 of 2-methyltetrahydrofuran) [95], the resulting C-centered radical is more stable and more sterically hindered than the secondary radical and reacts less efficiently with 4-methylquinoline. For 1,3-dioxolane 2c, two isomeric products 3ac and 3ac' were formed, and the major product 3ac corresponds to the breaking of the weakest C2-H bond in 1,3-dioxolane. With dioxane and tetrahydropyran, the reaction proceeded more slowly, but with a longer reaction time, its selectivity decreased simultaneously with an increase in conversion. With glyme, the dehydrogenative coupling product was not observed even after 24 h of reaction. In the case of diethyl ether as a substrate, the reaction under the standard conditions was not effective due to the immiscibility of Et2O and the H2O contained in TBHP (70% aq.), which led to the aggregation of TiO2 particles in water droplets and the low conversion of 1a. The solution to the problem was the use of anhydrous TBHP, prepared before the reaction (see experimental details for Scheme 2). The same problem limited the reaction time for the coupling of 1a with Et2O, since the water generated during TBHP reduction accumulated in the reaction mixture and made the TiO2 suspension unstable.
Scheme 2. Scope of ethers for the photocatalytic Minisci reaction with 4-methylquinoline 1a. In the next step, the scope of the electron-deficient N-heterocycles was tested (Scheme 3). N-heterocycles with electron-donor groups reacted more slowly than substrates with electron-withdrawing groups, but at the same time, higher selectivity was observed (products 3ba and 3ea in comparison with 3ca). The reaction is sensitive to steric hindrance: 2-chloro-5-bromoquinoline 2d did not yield the target product 3da, presumably due to the presence of a bulky Br substituent near the 4th position of the quinoline. Our photochemical system is also applicable to quinoxalines and pyrazines. It is worth noting that the products 3ga and 3ha have not been previously reported (see Supplementary Materials for additional information). In general, the reaction is inefficient for pyridines with no substituents or with electron-donor substituents (pyridine, picolines, lutidine), but good yields have been obtained for pyridines with electron-acceptor substituents, such as pyridine-3-carboxylic acid methyl ester (product 3ia). 4-Methylquinoline-N-oxide reacted with preservation of the N-oxide function (product 3ja). Good yields have also been obtained in the reaction with isoquinoline (product 3ka). In the reaction with imidazo[1,2-a]pyridine 2l, it was only possible to isolate the product of deep oxidation with destruction of the ring, 3la'. It should also be noted that the addition of acid (TFA) afforded increased yields in some cases (products 3ba, 3ca, 3ea, 3ga, 3ha, 3ja and 3ka). It turned out that carrying out the reaction to complete conversion of the π-deficient arenes in the NHPI/TiO2 photochemical system leads to a sharp drop in selectivity for target product 3. We assumed that product 3 could undergo further oxidation under the reaction conditions. To find out what role the individual components of the system play in oxidation, we performed control experiments in which the pure reaction product 3aa was placed under the standard reaction conditions or irradiated in an inert atmosphere in the absence of NHPI or TBHP (Scheme 4).
Under the standard conditions, an 86% conversion of 3aa was observed in 8 h (Scheme 4, A). In the absence of TBHP under an air atmosphere, the product is also oxidized (88% conversion, Scheme 4, B), which suggests that a significant role in the decomposition of the product is played by air as an oxidant. The primary oxidation product was hydroperoxide 3aa', which was detected in a mixture of oxidation products by 13C NMR and was confirmed by HRMS (see Supplementary Materials). The 13C signal with a chemical shift typical of a geminal alkoxy hydroperoxide fragment was observed [96]. However, carrying out the reaction under an argon atmosphere (Scheme 4, C) does not completely suppress the oxidation of product 3aa, since TBHP or residual amounts of oxygen can serve as oxidants. The lowest conversion of the product was observed when the reaction was carried out in an argon atmosphere without the addition of NHPI (Scheme 4, D), implying that NHPI-derived PINO radicals play an important role in 3aa oxidation. Based on the collected data, we proposed the following mechanism (Scheme 5). Upon irradiation with visible light, PINO radicals are generated from NHPI on the TiO2 surface.
Simultaneously, tert-butyl hydroperoxide decomposes on the TiO2 surface with the formation of tert-butoxyl radicals. Tert-butoxyl radicals can regenerate PINO by abstracting a hydrogen atom from NHPI in solution [59]. Tert-butoxyl radicals can also generate tert-butylperoxyl radicals from t-BuOOH [97,98]. Either tert-butoxyl, tert-butylperoxyl [99][100][101], or PINO radicals [59,95] can abstract a hydrogen atom from the α-CH bond of the ether to form the C-centered radical A. However, considering the fact that no cross-dehydrogenative coupling was observed without the addition of NHPI, the main role in H-atom abstraction is assumed to be played by the PINO radicals. Then, radical A undergoes addition to the heteroarene with the formation of the intermediate radical B, which is further subjected to HAT with the retrieval of aromaticity. Experimental details for Table 2 4-Methylquinoline 1a (1 mmol, 143. If needed, another 4 mmol of t-BuOOH was added, and the reaction mixture was irradiated for another 8 h. At the end of the required time, the reaction mixture was poured into 20 mL of water and extracted with 3 × 15 mL of CH2Cl2. The combined organic extracts were washed with 2 × 20 mL of saturated NaHCO3 solution. The extracts were dried over MgSO4, and the solvent was evaporated under vacuum (membrane pump). The residue was purified using column chromatography to afford products 3aa-3ka. For the reaction of 1a with Et2O, anhydrous t-BuOOH was prepared: t-BuOOH 70% aq. (12 mmol, 1545 mg) was extracted with CH2Cl2 (10 mL), the organic layer was dried over MgSO4, and the solvent was rotary evaporated. The obtained anhydrous t-BuOOH was used instead of t-BuOOH 70% aq. For the longer reaction times, a new portion of anhydrous t-BuOOH (4 mmol, 360 mg) was added every 8 h. Conclusions In this work, a new visible-light-active heterogeneous photocatalytic system based on industrially available and non-toxic TiO2 and NHPI was proposed for the cross-dehydrogenative C-C coupling of electron-deficient N-heterocycles with ethers. In this photocatalytic system, phthalimide-N-oxyl radicals photogenerated on the surface of titanium oxide become active mediators of the reaction, which leads to (1) an increase in efficiency due to the homogeneous organocatalytic process in solution and (2) the selective cleavage of weak CH bonds. We have proposed a new mild method for the generation of C-centered radicals from non-activated ethers for the Minisci reaction. Despite the fact that acidic additives are frequently used in Minisci-type reactions, the addition of acid was not necessary in our procedure for several substrates. Optimal conditions were chosen for the Minisci reaction of π-deficient pyridine, quinoline, pyrazine, and quinoxaline heteroarenes with non-activated ethers. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules28030934/s1, copies of NMR spectra of the synthesized products, the comparison of the developed method with the literature procedure, the determination of the side products of the studied reaction.
5,860.4
2023-01-17T00:00:00.000
[ "Chemistry" ]
Response Analysis of the Free Field under Fault Movements A quasistatic simulation of highly nonlinear problems under fault movements was carried out using the EXPLICIT module of ABAQUS. Combined with the secondary development program of the software, the application of the strain softening Mohr–Coulomb model in the simulation was realized. Free field-fault systems were simulated with two fault types (normal and reverse faults), four fault dip angles (45°, 60°, 75°, and 90°), and two kinds of soil (sand and clay). Moreover, the rupture laws and sensitivities of the sand and clay were studied with different soil thicknesses and different fault dip angles in the free field. The results show that the width of the ground zone with obvious deformation, which represents the point of the fault outcrop, the critical displacement of the fault, and the rupture characteristics of the overlying soil are closely related to the fault type and soil parameters. The critical displacement of the reverse fault is larger than that of the normal fault. The width of the ground zone with obvious deformation varies from 0.65 to 1.3 and does not exhibit a regular relationship with the type of soil. Compared with a normal fault, the rupture of a reverse fault is not prone to exposure at the surface. Introduction Understanding the failure mechanisms of the free field under fault movements can provide a reference for analysing complex structures, indicating that the free field is an important part of engineering design and research. The simulation of the free field is simple, and there are no additional structural interactions with the fault, making it a relatively simple case of fault movement. When a bedrock fault is covered with a certain depth of soil, the dip angle, the displacement of the fault, and the soil parameters can be determined. However, there are still two problems worthy of our attention: (1) whether the fault rupture zone can be exposed at the surface along a fault outcrop and (2) what the deformation of the ground is and what causes significant deformation of the ground when the overlying soil ruptures. These problems are particularly important in practical applications of engineering and research, and thus, many scholars' attention has been directed toward this issue in recent years.
At present, many scholars have studied the rupture laws of the overlying soil due to fault movements, and most studies adopted numerical simulation methods [1][2][3][4][5]. Previous studies have found that when the thickness of the overlying soil reaches 30 m to 50 m, or even up to 75 m, the overlying soil layer will rupture when the vertical displacement of the fault reaches 3%-5% or 7% of the soil thickness, respectively. When the thickness of the soil layer is more than 100 m, it is difficult for the overlying soil to rupture. If the overlying soil layer on a reverse fault is in a compressive stress state or if the shear modulus of the soil is high, the soil layer can easily rupture. Before the fault rupture zone develops at the ground surface, the propagation of seismic waves can rupture the ground surface [6]. When the soil layer contains weak layers, they can mitigate the effects of the fault movement on the overlying soil. If the fault displacements are the same, the destruction caused by a reverse fault will be the largest, and the rupture zone of the overlying soil will finally tilt toward the footwall. The destruction caused by a normal fault would be the second largest, and a strike-slip fault is the least destructive [7]. For the case of overlying soil failure due to vertical movement, as the dip angle becomes larger, a greater vertical displacement is required for the surface to rupture. Due to the influence of inertia forces, the loading rate of the fault movement has a certain effect on the displacement required for the ground to rupture and on the fracture angle of the soil layer. The fault type also has a great influence on the relationship between the ground surface fracture angle and the dilation angle and between the ground surface fracture angle and the friction angle [8]. When the overlying soil thickness is the same, the ground rupture zone will differ as the fault displacement changes. However, the permanent deformation characteristics of the overlying soil are almost the same. If the shear wave velocity is low, the ground rupture zone will be wide. With an increase in the soil thickness, the fracture zone also widens. Compared with a clay layer, the effect of a sand layer is very small [9]. Above all, most previous numerical analyses were concerned with the free field-fault system and were focused on studying the fracture characteristics of the overlying soil. However, few studies have been conducted on obvious ground deformations or on the sensitivity of the overlying soil deformation and rupture with different faults and soils. To further study the response characteristics of the free field under fault movements and understand the influences of fault movements on the overlying soil, this paper comprehensively considered four key factors through numerical simulations: the quasistatic analysis, the mesh size, strain softening, and material damping. The ABAQUS/EXPLICIT software was used for the simulations and calculations; the simulations used the Mohr-Coulomb yield criterion in consideration of the material's strain softening properties. In addition, this study focused on two questions. The first is whether the fault rupture surface can be exposed at the ground surface and where the outcrop position is, and the second addresses the ground deformation mode and the zone of obvious ground deformation due to overlying soil fractures. Therefore, the fracture deformation characteristics of the overlying free field under a reverse fault and a normal fault can be obtained.
Numerical Simulation Methods and Conditions 2.1. Finite Element Model and Method. Combined with the quasistatic method, the EXPLICIT module of the finite element software ABAQUS is used for this numerical simulation. The quasistatic method has been widely used in previous studies to solve problems of fault movement and propagation [10]. This method can reflect the dynamic characteristics of loading to a limited extent and indicate the response of the overlying soil under fault movements. Since the problem involves relatively large displacements, the effect of large deformation is considered to obtain improved simulation results. The explicit dynamics analysis procedure is based on the implementation of an explicit integration rule together with the use of diagonal or "lumped" element mass matrices. The equations of motion for the body are integrated using the explicit central difference integration rule:

u'(i+1/2) = u'(i-1/2) + ((Δt(i+1) + Δt(i))/2) u''(i),  u(i+1) = u(i) + Δt(i+1) u'(i+1/2),

where u' is the velocity, u'' is the acceleration, i is the increment number, and i ± 1/2 denotes the mid-increment values. A small amount of damping is introduced to control high-frequency oscillations. With damping, the stable time increment is given by

Δt ≤ (2/ω_max)(√(1 + ξ²) - ξ),

where ω_max is the highest element eigenfrequency and ξ is the fraction of critical damping in the highest mode. The explicit integration rule is simple but cannot by itself provide the computational efficiency associated with the explicit dynamics procedure. The explicit procedure requires no iterations and no tangent stiffness matrix. A special treatment of the mean velocities u'(i+1/2) and u'(i-1/2) is required for the initial conditions as well as for certain constraints and for presenting the results. For the presentation of the results, the state velocities are stored as a linear interpolation of the mean velocities:

u'(i+1) = u'(i+1/2) + (Δt(i+1)/2) u''(i+1).

The central difference operator is not self-starting because the value of the mean velocity u'(-1/2) needs to be defined. The initial values (at time t = 0) of the velocity and acceleration are set to zero unless otherwise specified by the user. Therefore, the condition is set as follows:

u'(+1/2) = u'(0) + (Δt(1)/2) u''(0).    (4)

Substituting (4) into the updated expression for u'(i+1/2) yields the following definition of u'(-1/2):

u'(-1/2) = u'(0) - (Δt(1)/2) u''(0).

The selected soil constitutive model is the Mohr-Coulomb model, which can realize the numerical simulation of most geotechnical engineering problems and achieve good simulation results. To consider the strain softening characteristics, this paper utilizes the user subroutine USDFLD (VUSDFLD) to supplement the Mohr-Coulomb model. The main purpose is to change the field variables at the material points to alter the properties of the material. The following interface program can be customized:

DIMENSION FIELD(NFIELD), STATEV(NSTATV), DIRECT(3,3), T(3,3), TIME(2)
DIMENSION ARRAY(15), JARRAY(15), JMAC(*), JMATYP(*), COORD(*)
C     user coding to define FIELD and, if necessary, STATEV and PNEWDT
RETURN
END

The vertical movement velocity of the fault was calculated and compared for v = 1 m/s, 0.5 m/s, 0.1 m/s, and 0.05 m/s. The ground vertical displacement curves and ground tilt curves are shown in Figure 1.
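To make the update rule above concrete, the following minimal Python sketch integrates a single-degree-of-freedom system with the explicit central-difference scheme, including the half-increment starting condition; the mass, stiffness, and load values are illustrative assumptions and are not parameters of the model in this study.

# Minimal explicit central-difference integration for a 1-DOF system
# m*u'' + k*u = f(t); all values below are illustrative assumptions.
m, k = 1.0, 400.0                  # mass and stiffness
f = lambda t: 1.0                  # constant external load
omega_max = (k / m) ** 0.5         # highest (here, only) natural frequency
dt = 0.9 * 2.0 / omega_max         # below the undamped stability limit 2/omega_max

u = 0.0                            # u(0)
a = (f(0.0) - k * u) / m           # acceleration at increment 0
v_half = 0.0 + 0.5 * dt * a        # starting condition: v(1/2) = v(0) + (dt/2)*a(0)

for i in range(1, 201):
    u += dt * v_half               # u(i) = u(i-1) + dt * v(i-1/2)
    a = (f(i * dt) - k * u) / m    # a(i) from the equation of motion
    v_half += dt * a               # v(i+1/2) = v(i-1/2) + dt * a(i)

# Undamped, so u oscillates about the static solution f/k = 0.0025
print(f"u after 200 increments: {u:.5f}")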
According to Figure 1, a high sliding velocity of the fault will concentrate the strain in the lower part of the overlying soil, while the peak value of the ground tilt is smaller than under conditions with a relatively low velocity. However, when the velocity decreases to 0.1 m/s and continues to be reduced, the plastic strain and ground displacement of the soil no longer show clear changes; instead, the time required for the calculation increases. Therefore, the fault sliding velocity used in this numerical simulation is 0.1 m/s, and the quasistatic method is selected for the simulation. The ground vertical displacement curves and ground tilt curves with different grid sizes are shown in Figure 2. For the finite element calculation, the grid division directly affects the accuracy of the calculation. The simulation compares the conditions where the size of the grid in the middle of the model is L = 0.1H, L = 0.075H, L = 0.05H, and L = 0.025H, where H is the depth of the soil. As seen from Figure 2, the ground deformation differs due to the different grid sizes. With a decrease in the grid size, the maximum inclination of the ground increases gradually. In addition, the calculation results may be underestimated with a large grid, which will negatively impact the understanding of the rupture characteristics of the overlying soil. Therefore, a grid size of L = 0.025H is adopted in this model. The overlying soil will produce a relatively large deformation and strain, and the reduction of the soil strength should be considered at this point. If the numerical simulation does not consider the impact of this factor, the results may differ from those in practical situations. In addition, previous scholars have made a good comparison of this result [11][12][13]. Simulations both with and without strain softening are carried out in this study. When considering strain softening, the residual friction angle and residual dilation angle are φres = 25° and ψres = 0°, respectively, and the critical value of the plastic shear strain is e0 = 0.1. The plastic strain zones under the two conditions both pass through the overlying soil layer and have uniform distributions. However, when strain softening is not considered, the angle between the rupture zone and the horizontal plane is obviously smaller, and the maximum inclination angle of the ground is lower [14,15]. The ground displacement in consideration of strain softening is shown in Figure 3. It can be concluded from the comparison of the plastic strain zone and ground deformation under the two conditions that considering strain softening in this model is necessary to fully estimate the effects of fault movements on the overlying soil.
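The softening branch described above, in which the friction and dilation angles fall to φres = 25° and ψres = 0° as the plastic shear strain grows to e0 = 0.1, is the kind of rule that the USDFLD/VUSDFLD subroutine encodes through a field variable. A minimal Python sketch of a linear softening law of this form is shown below; the peak angles are assumed for illustration only, since the actual soil parameters are given in Table 1.

def softened(peak, residual, eps_p, e0=0.1):
    """Linear strain softening: the angle drops from its peak to its
    residual value as the plastic shear strain grows from 0 to e0,
    then stays at the residual value."""
    if eps_p >= e0:
        return residual
    return peak - (peak - residual) * eps_p / e0

# Hypothetical peak values for illustration (not taken from Table 1):
phi_peak, psi_peak = 35.0, 5.0     # peak friction and dilation angles, degrees
for eps_p in (0.0, 0.025, 0.05, 0.1, 0.2):
    phi = softened(phi_peak, 25.0, eps_p)   # residual friction angle 25 deg
    psi = softened(psi_peak, 0.0, eps_p)    # residual dilation angle 0 deg
    print(f"eps_p = {eps_p:5.3f}: phi = {phi:4.1f} deg, psi = {psi:3.1f} deg")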
For the quasistatic problem, the method can greatly reduce the influence of dynamic waves on the model, but the damping of the material cannot be ignored. In view of this, Rayleigh damping is considered in the model [16], and numerical simulations both with and without material damping are performed. When considering material damping, the plastic strain zone breaks through the overlying soil layer with a uniform distribution. However, when material damping is not considered, not only is the overlying soil layer penetrated by the rupture zone but a local plastic strain zone also appears on the left ground side of the fracture zone. In addition, the rupture of the overlying soil is irregular [17]. Similarly, results considering the material damping of the ground displacement are shown in Figure 4. From Figure 4, it can be seen that an upward sharp angle appears on the left side of the vertical displacement curve, related to the region of local plastic strain concentration. The ground tilt value fluctuates when material damping is not considered. Therefore, although the quasistatic method is used for the numerical simulation, the damping of the material is still considered. In this model, the overlying soil is a homogeneous single body. The fault plane is flat and penetrates the bedrock to reach the lower part of the soil. Therefore, all of the attention can be focused on the internal response of the overlying soil due to the fault movements. The structure and parameters of the finite element model are shown in Figure 5. When the fault fracture zone appears at the ground surface, the vertical displacement of the fault is the critical fault displacement, which is recorded as d0/H. S/H and P/H are expressed as the corresponding values at the completion of the fault displacement, namely, d/H = 5%. The Model Grid and Boundary Conditions. The model grid and boundary conditions with a 20 m deep overlying soil layer are shown in Figure 6. The bottom boundary of the model is the interface between the overlying soil layer and the fault, assuming that the ground and the interface between the soil and bedrock are both horizontal. To minimize the boundary impact of plastic strain accumulation on the central fault grid, the model width should be approximately 3 to 4 times larger than the height, and thus, the simulation adopts a factor of 4. The central area of the fault should be approximately 1.5 to 2 times larger than the height, and thus, a factor of 2 is used here. The model is composed of plane strain quadrilateral elements. The grid should be as refined and regular as possible to ensure the accuracy of the calculation. Therefore, the grid size of the model's central zone is 2.5% of the model depth H, while the percentage gradually increases from 2.5% to 5% from the middle to the outside in the other parts. The right side of the footwall is established with a fixed constraint in the horizontal direction, and both sides of the boundary are unconstrained in the vertical direction. The contact between the overlying soil layer and the bedrock is considered to be fully bonded. In addition, since this hypothesis is more reasonable than establishing a rough interface between the rock and soil layers, relative slip at the interface can be completely avoided.
In this case, p = 0 for h < 0 (open) and h = 0 for p > 0 (closed), where p is the contact pressure between two surfaces at a point and h is the interpenetration of the surfaces. The contact constraint is enforced with a Lagrange multiplier representing the contact pressure in a mixed formulation. The virtual work contribution is

δΠ = δp h + p δh,

and the linearized form of the contribution is

dδΠ = δp dh + dp δh.

3. Analysis of the Displacement Deformation and Parameters of the Overlying Soil The elastic modulus E is a function of the depth H according to E = E0 H^(1/2), where the elastic modulus of the dry sand is E0 = 20 MPa and that of the clay is E0 = 5 MPa. The parameters of the two types of soil are shown in Table 1. The thickness of the overlying soil is 20 m in the main research. To study the influence of the thickness on the response of the overlying soil, H = 5 m, H = 10 m, and H = 40 m are also considered. Displacement and Deformation Analysis. If the numerical parameters are in accordance with the established model, the simulation results are in good agreement with actual investigations of fault movements [18]. When the overlying soil thickness is 20 m, the resulting ground deformation curves with different fault displacements and different types of overlying soil (dry sand and clay) are shown in Figures 7-10. Two nondimensional parameters are adopted here, namely, y/H and x/H. From Figures 7-10, it can be seen that the critical displacement of the reverse fault is larger than that of the normal fault. For the normal fault at 45°, the ground deformations of the sand and clay both show a concave shape. For clay, the width of the ground affected by this phenomenon is larger, but the concave shape is not as obvious. This may be due to the presence of a second fracture zone that is nearly symmetrical across the middle of the model, perpendicular to the main fracture zone in the overlying soil. In addition, this phenomenon is eliminated with an increase in the fault dip angle. It can be concluded that when the dip angle of the fault is small, a second fracture zone may appear in the overlying soil in the free field. Therefore, when the dip angle of the fault is small in an actual engineering project, it is necessary to perform a comprehensive analysis according to the site conditions and have an exhaustive understanding of the various situations that may be caused by fault movements. Thus, a small dip angle of the fault is closely related to the different parameters of the overlying soil; that is, the extent of the dip angle that can cause a second fracture zone depends on the soil parameters. The ground incline curves are shown in Figures 11-14, and the curves of the maximum absolute inclination angle of the ground (dy/dx) are shown in Figure 15.
As indicated in Figure 15, the maximum inclination of the ground corresponding to sand is larger than that corresponding to clay regardless of the fault dip angle [19]. In the reverse fault, the internal compression of the sand is smaller than that of the clay. Therefore, the surface deformation response of the overlying soil to the fault is greater. However, in the normal fault, the clay, which resists tension, can reduce the soil stretching and shearing caused by the fault due to its cohesion and tensile strength. Therefore, the greater the strength of the overlying soil, the more severe the response to the fault will be, and this violent reaction will be accompanied by a greater ground deformation. Parameter Analysis. With an increase in the fault movement, the maximum inclination of the ground increases gradually; in addition, the rate of increase is slow at the beginning, for example, when d/H = 0.5%. When the fracture zone reaches the ground, the increase in the maximum inclination becomes more rapid. The relationship curves between d0/H and the fault dip angle for the different soils are shown in Figure 16. The following conclusions can be drawn from these curves. First, under a normal fault (β < 90°), d0/H increases with an increase in the fault dip angle β for the different soils, with values distributed between 0.2% and 0.8%. Second, under a reverse fault (β > 90°), the value of d0/H increases and reaches 1.2% when β = 135°. Obviously, the critical displacement of the fault d0/H increases with an increase in the fault dip angle β, and the critical displacement of the reverse fault is larger than that of the normal fault. In general, the d0/H value of clay is larger than that of sand, and clay requires a larger value of d0/H than sand for the fracture zone to appear at the ground surface. There are two approaches to determine the location of the fault outcrop. One approach is to find the intersection between the main rupture zone and the ground, and the other is the maximum inclination point of the ground. The shortest horizontal distances from the above two points to the fault are expressed as S and P, respectively. S/H and P/H follow the same laws; that is, both of them decrease gradually with an increase in the fault dip angle for the normal fault, and they have an increasing tendency with an increase in the fault dip angle for the reverse fault (where the fault dip angle β = 180° - β_reverse fault). In addition, under a normal fault, the S/H and P/H values of sand are less than those of clay, but they are larger than those of clay under a reverse fault. The outcrop point of the fault and the intersection between the fault and the soil layer are connected by segments, and the angles between each segment and the horizontal plane (referred to as the "fracture zone inclination") are shown in Table 2.
From Table 2, it can be seen that the fracture zone inclination of sand is higher than that of clay in a normal fault, while the opposite is true in a reverse fault. The relationship between L/H and the relative displacement d/H of the lower bedrock is illustrated in Figure 17. It can be seen that L/H does not exceed the given critical value before sufficient movement of the bedrock has accumulated, after which L/H increases at a rapid rate. However, the rate of change of L/H then decreases markedly and tends to be stable when d/H reaches 1%. It can be seen from the trend lines that L/H is directly proportional to the fault dip angle and that different soils correspond to different values of L/H. Moreover, when the corresponding d0/H value is higher, the value of L/H is higher. This trend illustrates that the magnitude of L/H is closely related to the deformation accumulated in the soil before the fracture zone reaches the ground; this explains why the L/H value of clay is larger than that of sand and, likewise, why the value of L/H in the reverse fault is larger than that in the normal fault. This is probably due to the effect of d0/H, but the difference in L/H between the reverse and normal faults is smaller than that in d0/H because the growth of L/H essentially halts once d/H exceeds 1%.

To illustrate the influence of different soil depths H on the various parameters, H is set to 5 m, 10 m, 20 m, and 40 m for the calculations. The relationships between P/H, S/H, L/H, and d0/H and the depth H are shown in Figure 18. From Figure 18, it can be seen that the performance characteristics are the same regardless of the soil type. Namely, the P/H and S/H values are nearly independent of the depth H, whereas the L/H and d0/H values increase with an increase in the depth H. With variations in the depth H, the changes in P/H and S/H are small and smooth. Therefore, this property can be used to estimate the location of the fault outcrop point, which is convenient for engineering design and research purposes. The L/H and d0/H values increase gradually with depth; that is, the obvious surface deformation zone widens and the critical displacement required for the fault outcrop increases.
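A minimal sketch of how the near depth-independence of S/H and P/H noted above could be used in practice (the ratio values below are placeholders for illustration, not values from the paper's tables): calibrate the ratios once for a given soil and fault type, then project the outcrop distances for other soil depths.

```python
# Hedged sketch: project the horizontal outcrop distances S and P for a new
# soil depth H, assuming S/H and P/H are roughly depth-independent for a
# given soil and fault type. The calibrated ratios below are illustrative.
def project_outcrop_distances(H_m: float, S_over_H: float = 0.35,
                              P_over_H: float = 0.30) -> tuple[float, float]:
    """Return estimated (S, P) in metres for overlying soil thickness H_m."""
    return S_over_H * H_m, P_over_H * H_m

for H in (5.0, 10.0, 20.0, 40.0):   # the depths considered in the study
    S, P = project_outcrop_distances(H)
    print(f"H = {H:>4.0f} m  ->  S ~ {S:5.1f} m,  P ~ {P:5.1f} m")
```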
Conclusions

The finite element software ABAQUS is utilized to simulate the free field-fault system and to study the fracture mode, ground displacement, and deformation characteristics of the overlying soil in the free field. (1) The direction of a fault in the fracture zone of the overlying soil may deflect or bend with respect to the fault dip. When the dip angle of the normal fault is less than or equal to 45°, a second fracture zone may appear in the overlying soil that may lead to a downward movement of a triangular fracture block, similar to Coulomb's earth pressure theory in soil mechanics. However, this phenomenon is eliminated with an increase in the fault dip angle. (2) Compared with a normal fault, the fracture zone in a reverse fault cannot easily outcrop at the ground surface; that is, a larger critical fault displacement is required before the outcrop is achieved. Moreover, with an increase in the fault dip angle, the critical fault displacement increases gradually, and the width of the obvious surface deformation zone at the ground surface also increases gradually from the normal fault at 45° to the reverse fault at 45° (β = 135°). (3) Under the same fault dip angle and displacement, the dynamic response of sand is greater than that of clay. In addition, sand exhibits a larger deformation at the ground surface, in close relation to the soil strength. Sand is a compressive, non-tensile material, and its compression is less than that of clay due to its high strength; therefore, the response of the overlying soil surface to the fault movement is greater, and the fracture zone dip angle of sand is smaller than that of clay in a reverse fault. In a normal fault, the cohesion and a portion of the tensile strength of clay can reduce the effects of the extension and shearing that accompany the fault movements; therefore, the fracture zone dip angle of sand is larger than that of clay. (4) The width of the obvious deformation zone at the ground surface and the critical displacement of the fault are directly proportional to the depth of the overlying soil. However, the proportional relationship between the horizontal distance of the two outcrop points to the fault and the soil depth H is not obvious.

Figure 1: The displacement of the ground surface: (a) vertical displacement and (b) tilt displacement.
Figure 2: The displacement of the ground surface: (a) vertical displacement and (b) tilt displacement.
Figure 4: The displacement of the ground surface: (a) vertical displacement and (b) tilt displacement.
Figure 5: Schematic diagram of the model structure and parameters. L is the width of the obvious deformation zone at the surface, P is the horizontal distance from the maximum inclination point on the ground to the perpendicular bisector of the fault model, S is the horizontal distance from the midpoint of the shear zone to the model's perpendicular bisector, H is the thickness of the overlying soil, β is the fault dip angle, and d is the vertical displacement of the fault.
Figure 16: The relationship curves between d0/H and the fault dip angle.
Figure 17: The relationship curves between L and d with different fault dip angles for (a) sand and (b) clay.
6,412
2018-03-27T00:00:00.000
[ "Geology" ]
Thermodynamics, dielectric permittivity and phase diagrams of the Rb1−x(NH4)xH2PO4 type proton glasses

The cluster pseudospin model of proton glasses, which takes into account the energy levels of protons around the PO4 group, the long-range interactions between the hydrogen bonds, and an internal random deformational field, is used to investigate the thermodynamical characteristics and the longitudinal and transverse dielectric permittivities of Rb1−x(ND4)xD2PO4 and Rb1−x(NH4)xH2AsO4 compounds. A review of experimental and theoretical works on the Rb1−x(NH4)xH2PO4 type crystals is presented.

Experimental studies of the Rb1−x(NH4)xH2PO4 type compounds

The hydrogen bonded compounds of the Rb1−x(NH4)xH2PO4 type, which at certain compositions have a proton glass phase, have been intensively studied for more than 25 years. In order to describe possible proton configurations in the mixed Rb1−x(NH4)xH2PO4 type compounds, let us consider first the structure of the pure RDP (RbH2PO4) and ADP (NH4H2PO4) crystals. In figure 1 a unit cell of the KDP (KH2PO4) crystal, which is isomorphic to RDP, is shown. A primitive cell of the RbH2PO4 type compounds contains one PO4 tetrahedron of the "A" type and one PO4 tetrahedron of the "B" type, two Rb atoms, and four protons on four hydrogen bonds attached to the "A" type tetrahedron. In the ferroelectric phase the net dipole moment of the primitive cell, associated with displacements of heavy ions and deformations of the PO4 groups, is directed along the c axis. A triggering mechanism of the ionic displacements in these crystals is the ordering of protons (their positions are described by pseudospin operators S_f = ±1, f = 1, 2, 3, 4) in double-well potentials on the hydrogen bonds.

Using the dispersion relation ε(ν) ∝ ∫ d ln τ · g(τ, T)/(1 − i2πντ), the distribution function of relaxation times g(τ, T) was analyzed. In the time range τ ≈ [τ0, τc] the function g(τ, T) was qualitatively approximated by a rectangular distribution with the critical relaxation time τc. The best fit to the experimental data for x = 0.35 was obtained using the Vogel-Fulcher law τc = τ0 exp[Ec/(T − T0)], with T0 = 8.74 K, Ec = 268 K, and ν0 = 1/(2πτ0) = 3.49·10^12 Hz. At T = T0 the maximal relaxation time becomes infinite. In [10], using the measured dielectric permittivities of Rb0.5(ND4)0.5D2PO4, the value T0 ≈ 32 K was obtained. In [42] it has been shown that for Rb0.53(ND4)0.47D2PO4 the spectrum of the distribution function g(τ, T) consists of two wide lines; with decreasing temperature from 55 K down to 35 K a fast intensity redistribution from smaller times to larger ones takes place. These results are interpreted within a model of dynamically correlated domains [43,44], which form a system of classical dipoles. At the freezing temperature, part of them form an infinite percolation cluster. In this model T0 = 0 K (the Arrhenius law). At low temperatures an essential role is, most likely, played by proton tunneling. This is indicated by the maximum on the temperature curve of the dielectric loss tangent in Rb0.25(NH4)0.75H2PO4 [8] at T ≈ 0.2 K, as well as by the splitting of the NMR spectral lines of Rb0.56(ND4)0.44D2PO4 [45]. This means that deuteron motion is not completely frozen out. Tunneling lowers T0. Polarization relaxation and non-ergodic processes in proton glasses M1−x(NW4)xW2AO4 (M = Rb, K; W = H, D; A = P, As) were explored by the Monte-Carlo method in [46].
The following interactions were taken into account: 1) between protons in the "upper" or "lower", lateral (W2AO4), and Takagi (WAO4 and W3AO4) configurations; 2) between protons via NH4 ions, which in pure ammonium compounds render the state with lateral configurations the ground state; 3) proton-lattice interactions, arising as a displacement field if one of the nearest neighbours is the alkali ion whereas the other is the ammonium ion; 4) interactions with an external electric field. At a given temperature the average value of polarization was calculated; the total number of proton jumps was up to 10^7 for each temperature. The temperature variation of the polarization at heating in zero external field (P_ZFH, with the initial value P_ZFH(T = 0) = P_i) and at heating in a non-zero field (P_FH, with the initial value P_FH(T = 0) = 0) was approximated by fitted analytical dependences, with T_e ≈ 0.53·T_Slater and γ = 6 at small fields, where T_e is the non-ergodicity temperature. At small fields, when the temperature is raised up to T ≈ 0.38·T_Slater, the relations P_ZFH ≈ P_i and P_FH ≈ 0 hold; that is, at low temperatures the system is in the non-ergodic state.

Little attention has been paid to the investigation of the temperature dependence of the specific heat of these systems in the glass phase region. We have come across a single paper [47], where it has been shown that the molar specific heat C(T) of Rb1−x(NH4)xH2PO4 at x = 0.7 and x = 0.74 increases monotonously with temperature. Near 60 K the curve C(T) is somewhat convex upwards. This convexity is most likely related to the protonic contribution to the specific heat, which is difficult to separate from the lattice contribution.

The ferroelectric phase composition region

In this region at high temperatures the q_EA parameter obtained from the NQR linewidths in Rb1−x(NH4)xH2AsO4 with x = 0.01, 0.02 [48] and from the NMR linewidths in Rb1−x(ND4)xD2PO4 with x = 0.22 [39] is different from zero. This indicates a partial proton freezing at high temperatures. With lowering temperature, the transition to the ferroelectric phase takes place at T_c(x); in this phase a spontaneous polarization P_s exists. Unfortunately, the experimental data for P_s and q_EA are very limited, except for the case of x = 0. At x = 0, P_s has a jump at T_c(x). The temperature T_c(x) is maximal at x = 0 and decreases with increasing x, whereas the jump in P_s disappears (as observed in Rb1−x(NH4)xH2AsO4 at x = 0.08 [18]) and the phase transition is smeared out. The temperature T_c(x) can also be determined from NMR data. Thus, in [49], using the NMR method, it has been established that the temperature dependence of the spin-lattice relaxation time of 87Rb ions in Rb1−x(ND4)xD2PO4 has a minimum at T_c(x).

The transverse dielectric permittivity ε11(T, ν) of the Rb1−x(NH4)xH2PO4 type compounds in the ferroelectric phase composition region is somewhat smaller than in the glass phase region. It gradually increases at lowering temperature, has a rounded maximum at T_c(x), and rapidly decreases to a certain constant value below T_c(x). At an even lower temperature T_g(x) (an inflection point), the permittivity ε′11(T, ν) decreases to a minimal value. At the same time ε″11(T, ν) has two maxima, at T_c(x) and T_g(x).
The same behavior was also detected experimentally for ε11(T, ν) in other compounds of this family [16,18,51,52]. The longitudinal permittivity ε33(T, ν) of the Rb1−x(NH4)xH2PO4 type compounds in the ferroelectric phase composition region also has a rounded peak at T_c(x), but its height is two orders of magnitude larger than that of ε11(T, ν) and larger than in the glass phase composition region. It becomes larger and sharper with lowering x. Such a behavior of ε33(T, ν) was observed in [22,23,54,55]. In samples with smaller x the transition to the ferroelectric phase takes place at higher temperatures than in samples with higher x. Smearing of the transition to the ferroelectric phase is associated with fluctuations of the ammonium concentration. Such an explanation is confirmed by the data of [56], where in the neutron diffraction patterns of Rb0.9(ND4)0.1D2AsO4 the intensity maxima characteristic of the paraelectric and of the ferroelectric phase were shown to coexist in a certain temperature range (7-10 K). This fact indicates a coexistence of the two phases.

The presence of low-temperature peaks of ε″11(T, ν) and ε″33(T, ν) at T_g(x) in the ferroelectric phase composition region is related to the coexistence of the ferroelectric and glass phases. Such a coexistence was revealed by measurements of ε11(T, ν) in Rb1−x(NH4)xH2PO4 at x = 0.15 and 0.17 [50], in Rb1−x(NH4)xH2AsO4 [16,18,51,57], and in related systems [53,58]. It is believed that in the ferroelectric phase composition region the samples have small inclusions in which the concentration of NH4 is characteristic of the glass phase composition region. These inclusions undergo a transition to the proton glass state at the temperature T_g(x). With lowering x the temperature T_g(x) decreases. This is associated with a decrease of the dimensions and correlation length of the clusters where the transition to the glass state takes place; as a result, at low x the system dynamics is faster than at x close to the glass phase composition region. In [57] the imaginary part of the permittivity ε″11(T, ν) and the Cole-Cole curves were measured at different frequencies for low concentrations x = 0; 0.01; 0.05; 0.1 in Rb1−x(NH4)xH2AsO4 and Rb1−x(ND4)xD2AsO4. At x = 0.05; 0.1 a coexistence of the low-temperature proton glass phase and a non-uniform ferroelectric phase has been detected below T_g(ν, x). From the Cole-Cole curves the presence of a relaxation time distribution below T_g(ν, x) is evident. In [18] the temperature dependences of the spontaneous polarization of Rb1−x(NH4)xH2AsO4 and Rb1−x(ND4)xD2AsO4 (at x = 0.0; 0.08), as well as the transverse permittivities ε_a(T, 1 kHz) (for x = 0.0; 0.08; 0.4 in Rb1−x(NH4)xH2AsO4 and x = 0.0; 0.08; 0.28 in Rb1−x(ND4)xD2AsO4), were measured. It has been shown that at x = 0.08 in the temperature range between T_g(x) and T_c(x) the sample polarization is proportional to the contribution of the so-called lost dielectric response. This indicates the presence of proton glass inclusions in the ferroelectric matrix at x = 0.08.

Antiferroelectric phase composition region

In this region the high-temperature proton glass phase exists at high temperatures, since the q_EA parameter obtained from the NMR linewidths in the Rb1−x(ND4)xD2PO4 system is different from zero and increases with decreasing temperature [39,59]. At lowering temperature, a phase transition to the antiferroelectric phase takes place at T_N(x).
The transition temperature T_N(x) is maximal at x = 1, decreases with lowering x, and vanishes at a certain critical value of x, where the glass phase composition region begins. The temperature dependence of the Raman scattering line obtained in [35], corresponding to the ν2 vibrations of the PO4 tetrahedra in Rb1−x(NH4)xH2PO4 crystals at x = 0.8, has two bends, at 130 K and 65 K. The first bend corresponds to T_f(x) and to the onset of proton freezing on the O-H...O bonds, just like in the glass phase composition region. The second bend corresponds to the transition to the antiferroelectric phase at T_N(x), because below T_N(x) the frequency ν2 increases due to the formation of NH4-PO4 clusters. In [35] two bends are also observed in the ferroelectric phase composition region at x = 0.2: the first one at T_f(x), the second one at T_c(x). Using the experimental data for the transverse dielectric permittivity of [19,22,60], it has been established that ε11(T, ν) in the antiferroelectric phase composition region at T > T_N(x), just like in the glass and ferroelectric phase composition regions, increases with lowering temperature, but the value of ε11(T, ν) here is somewhat larger. Near T_N(x) a fast decrease of ε11(T, ν) takes place, which at x → 1 transforms into a break. At T < T_N(x), ε11(T, ν) is much smaller than at T > T_N and slightly decreases with lowering temperature. At x close to the glass phase composition region this decrease slows down, whereas the maximum of ε11(T, ν) at T_N(x) becomes rounded; that is, the phase transition is smeared out. As has been shown in [60], ε33(T, ν) in K1−x(NH4)xH2PO4 at x = 0.8 and 0.9 is qualitatively similar to ε11(T, ν), but twice smaller. This is the only experimental measurement of ε33(T, ν) in the antiferroelectric phase composition region, except for the case x = 1.

In the antiferroelectric part of the phase diagram, the coexistence of the deuteron glass and antiferroelectric phases in Rb1−x(ND4)xD2AsO4 (at x = 0.39, 0.55, 0.69) was revealed [61] using the measured temperature and frequency dependences of ε11(T, ν). This coexistence is indicated by a weak frequency dispersion of the temperature dependence of the permittivity below about 100 K (it is two orders of magnitude smaller than in the region with the deuteron glass phase only, at x = 0.28). In [62], by the example of the Rb1−x(NH4)xH2AsO4 system, the possibility of phase coexistence (of PE, the dynamically disordered paraelectric phase; PG, the structurally disordered proton glass state; FE, the ferroelectric phase; and AFE, the antiferroelectric phase) in this type of compounds is explored, and experimental evidence for this coexistence at different x is presented. The temperature dependence of the specific heat in the antiferroelectric phase composition region, as shown in [47] for Rb1−x(NH4)xH2PO4 at x = 0.79 and 0.89, has two peaks: one at T_N and a much lower one a few degrees below T_N. The second peak remains unexplained. Considering that the obtained results were not explained by their authors, and that an implausibly high specific-heat peak was obtained for these values of x, we can assume that these data are possibly unreliable. Unfortunately, for all compositions and for both dielectric permittivities, the experimental data obtained in different papers are in poor agreement. Let us consider here examples of such discrepancies. It should be noted that ε11(T, ν) and ε33(T, ν) were measured at different frequencies.
However, these frequencies are low enough that the dielectric permittivity hardly varies with frequency in this temperature range. Many other experimental data sets are available that disagree within 10%. Such discrepancies can be explained by errors in the measurements of ε11(T, ν) and ε33(T, ν), as well as by an incorrect determination of x. For example, the concentration of ammonium x in K1−x(NH4)xH2PO4 depends non-linearly on its concentration in the solution during sample growth [36]. The data for the temperatures T_c in the ferroelectric phase composition region are also contradictory. Thus, the T_c of Rb1−x(NH4)xH2AsO4 determined from the maximum of ε33(T, ν) in [17] is about 10 K larger than the T_c determined from the maximum of ε11(T, ν) in [16]. This means that the value of x is either overestimated in [17] or underestimated in [16]. A similar situation is observed for the experimental data for ε11(T, ν) in the antiferroelectric phase composition region. The values of ε11(T, ν) for Rb1−x(NH4)xH2PO4 at x = 0.9 measured in [4] at cooling are 20% larger than at heating and about 10% larger than those obtained in [3]. The value of ε11(T, ν) measured in [60] for K1−x(NH4)xH2PO4 at x = 0.8 is almost three times smaller than that found in [19]. Unfortunately, the experimenters who measured the tensors of the dielectric permittivity did not comment on the discrepancies between their results and the previous measurements. We think that the major origin of these discrepancies is the difficulty of growing identical samples for a given x, because these samples contain regions with different x. In spite of the quantitative differences, the qualitative behavior of the experimental curves of the dielectric permittivities of the Rb1−x(NH4)xH2PO4 type compounds is approximately the same. Therefore, theoretical studies of these compounds are very important.

Theoretical studies of the Rb1−x(NH4)xH2PO4 type compounds

From the point of view of a theoretical description, the Rb1−x(NH4)xH2PO4 type compounds, which in a certain composition region can undergo a transition to the proton glass state, are quite similar to magnetic compounds with a spin glass phase. Therefore, we can use the theoretical methods developed for spin glass models. A detailed description of the proton glasses, however, is not possible within the spin glass models, since these models take into account neither the random electric fields nor the real crystal structure of proton glasses. In [63,64] the Ising model in a transverse field with proton tunneling was explored. In [63] the interaction constants J_ij = ±J were taken to be different from zero only for the nearest neighbours. In [64], as in the Sherrington-Kirkpatrick model [65], the J_ij are long-range and fluctuate with a Gaussian distribution. Calculations performed therein in the mean field approximation have shown that in both cases tunneling lowers the temperatures of the transitions between the paraelectric and glass phases, T_g (q_EA ≠ 0 below T_g), as well as between the paraelectric and ferroelectric phase, T_c, or the antiferroelectric phase, T_N. In [45,66,67] the Ising model in a transverse field Ω_i with a random internal longitudinal field h_i, given by Hamiltonian (1.1), was explored, where E is a uniform external field.
Gaussian distributions are used for the random infinite-range interactions and for the random fields. In [66], within the replica symmetric approach, a system of equations for the unknowns p, q, r, as well as expressions for the free energy, the susceptibility χ, and the instability line of the replica symmetric solution (the Almeida-Thouless line), are obtained and explored. Here α, β numerate the replicas, and n is the total number of replicas. It is shown that a sharp transition temperature to the glass phase T_g exists only at ⟨h_i²⟩_c = 0 and corresponds to the peak on the temperature curve of χ(T). The random internal field (⟨h_i²⟩_c ≠ 0) leads to the occurrence of a proton glass-like state at any temperature above T_g (q_EA > 0, with q_EA → 0 as T → ∞) and smoothes the peak in the temperature curve of χ(T). The distribution function P(h) of the local fields was analyzed at Ω_i = 0: its shape at high temperatures is close to a Gaussian, whereas at lowering temperature or increasing ⟨h_i²⟩_c it transforms into a two-peak curve with a minimum at h = 0. Such a shape of P(h) qualitatively agrees with the experimentally observed shape of the EPR [24] and NMR [68] spectral lines. The temperature dependence of q_EA calculated within the model [66] agrees well with the second moment of the distribution function of the EPR [24] and NMR [39,68] spectral lines. In [25,69], for the model with Hamiltonian (1.1) at Ω_i = 0, the shape of the EPR line was calculated using the Glauber equation (a single-peak one at high temperatures and a two-peak one at low temperatures) that agrees well with the experiment in a wide temperature range (T = [10 K, 150 K]). For this model, as shown in [45], q_EA → 1 at Ω_i = 0, T → 0. In the presence of tunneling (Ω_i ≠ 0), q_EA < 1 at all temperatures, which means an incomplete freezing. In [67] the order parameter m and the parameter q_EA for the model with Hamiltonian (1.1) are calculated by the replica method, and the phase diagrams at different values of the transverse field and of the random field dispersion are constructed. Since in the presence of random fields q_EA > 0 at all T, the temperature of transition to the glass phase T_g(x) is introduced here as the temperature below which the replica symmetric solution is unstable; that is, the replica symmetry is broken and the system is in a non-ergodic state. It is established that the random fields decrease the temperatures T_g, T_c, and T_N and widen the glass phase region. It has been shown that between the glass and ferroelectric phases there exists a region where m ≠ 0 and the replica symmetric solution is unstable; this region is called the region of coexistence of the glass and ferroelectric phases. If in (1.1) the distribution function of the fields h_i consists of two Gaussians, then a critical point appears on the phase boundary between the ferroelectric and paraelectric phases, and the transition between them becomes of the first order [70]. In [71] a dynamic generalization of the static approach of [66] has been presented. Here an interaction of the pseudospins with the phonon thermostat is introduced into the Ising model Hamiltonian. This leads to a Debye-type relaxation of the permittivity ε(ν) [71], where the polarization and the Edwards-Anderson parameter, p and q, obey a coupled system of equations, and for the relaxation time a phenomenological Arrhenius-like expression is assumed. A quantitative comparison of the obtained results with experiment was performed for the temperature behavior of the ε″(ν) peak only.
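To make the structure of this dynamic result concrete, here is a minimal numerical sketch of a Debye-type permittivity with an Arrhenius-like relaxation time, in the same sign convention 1 − i2πντ used earlier; all parameter values are illustrative assumptions, not the fitted values of [71]:

```python
# Hedged sketch: Debye-type relaxation
#   eps(nu) = eps_inf + d_eps / (1 - i*2*pi*nu*tau)
# with an Arrhenius-like relaxation time tau(T) = tau00 * exp(Ea/T).
import numpy as np

def debye_eps(nu_hz, T_k, d_eps=50.0, eps_inf=5.0, tau00=1e-13, Ea_k=300.0):
    tau = tau00 * np.exp(Ea_k / T_k)          # relaxation time in seconds
    return eps_inf + d_eps / (1.0 - 1j * 2.0 * np.pi * nu_hz * tau)

nu = 1e6                                      # probing frequency, Hz
for T in (15.0, 25.0, 50.0):
    eps = debye_eps(nu, T)
    print(f"T = {T:5.1f} K   eps' = {eps.real:7.2f}   eps'' = {eps.imag:7.2f}")
```

Sweeping the temperature shows the characteristic behaviour discussed in the text: ε″ passes through a peak near the temperature at which 2πν·τ(T) ≈ 1, while ε′ drops from its static to its high-frequency value.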
It is claimed in [71] that this simple approach can be useful for the description of the dielectric properties of deuteron glasses. However, the relaxation theory of deuterated mixtures [71] based on this model does not yield a correct frequency dependence of the dielectric permittivity. The drawback of the above-described calculations based on the Ising model with a transverse field and a random longitudinal field is that they do not take into account the real structure of the Rb1−x(NH4)xH2PO4 type compounds. Also, the interactions considered therein are long-range ones (of the Sherrington-Kirkpatrick type), whereas in the real systems the major role is played by the nearest-neighbour interactions.

The first theory of the Rb1−x(NH4)xH2PO4 mixtures that takes into account their real structure has been proposed in [72]. A pseudospin Hamiltonian was used to describe the energy levels of protons near the PO4 groups; the critical lines T_c(x), T_N(x) (from an expansion over the order parameter) were found in the cluster approach, and a qualitative description of the experimentally observed phase diagram was obtained. Later the cluster approach was used in [73,74]. Thus, in [73], for the description of the Rb1−x(NH4)xH2PO4 mixtures, a pseudospin model was proposed that takes into account the configurational energy of the cluster of hydrogen bonds near a PO4 group and a long-range interaction W. Here φ_cl,i are the cluster fields that take into account the interactions of the i-th hydrogen bond with the protons of the neighbouring tetrahedra and are determined from the condition of the extremum of the free energy for the mixture of different phases. The Hamiltonian parameters U, V are related to the two lowest levels of the hydrogen cluster in RDP (ε0, ε1) and in ADP (ε̄0, ε̄1). The free energy is presented as a sum of the energies of three phases with the probabilities p+ for the ferroelectric phase, p− for the antiferroelectric phase, and p0 for the neutral phase. It is assumed that the state of each tetrahedron is formed by the six nearby ionic positions (Rb or NH4). Two of these six positions are the closest; therefore, the ferroelectric (antiferroelectric) state of the tetrahedron is formed if they are occupied by Rb (NH4). In other situations a neutral state is formed. From the analysis of the free energy expansion over the parameters S1 + S3 and S1 − S3, the regions of the ferroelectric (0 < x < 0.2 at T = 0) and antiferroelectric (0.75 < x < 1 at T = 0) phases on the phase diagram are found to be close to the experimental ones. This model was used to describe the diagram of states in the proton glass region (0.2 < x < 0.75 at T = 0) in [74]. Here the replica symmetric approximation was used in averaging the system free energy, with the parameter q = ⟨S_f^α S_f^β⟩ (α, β are the replica numbers) being an analog of the Edwards-Anderson parameter. Analytical expressions for the partition function L(n, q) and the temperature of the glass transition T_g(n) (at which q = 0) are found for the numbers of replicas n = 2, 3, 4; for T_g(n) an expression is found for an arbitrary n, from which an explicit expression for kT_g was obtained. Thus, no consistent approach to the description of all states of these compounds has been presented in [73,74]. An original approach to the description of the thermodynamical properties of proton glasses has been proposed in [75-77].
The model Hamiltonian contains terms responsible for the ferroelectric ordering along the Z axis (the S^z components of the classical spin) and for the antiferroelectric ordering (the S^x components). Restricting the consideration to the quadratic terms in the Hamiltonian and averaging the system free energy over the concentrations by the replica method, in the replica symmetric approximation a system of equations was obtained for the parameters of ferroelectric (p) and antiferroelectric (ξ) ordering, as well as for the parameters of short-range ordering g_z, g_x (correlations between the nearest dipole moments). Here 1, 2 label the sublattices of the site i, and ⟨...⟩_c means configurational averaging. The constructed phase diagram for Rb_n(NH4)_{1−n}H2AsO4 qualitatively agrees with experiment. At high temperatures (T ≳ 210 K), p = 0 and ξ = 0, whereas for g_{z,1}, g_{x,1} there exist single solutions that correspond to the paraelectric region. The proton glass region is associated with the appearance of additional solutions for g_z, g_x at p = 0, ξ = 0 (at low temperatures the maximal number of solutions is equal to 5). Fluctuations of the dipole moments are described by the averages among the dipole moments of the nearest spins, g_z, g_x. The self-correlations of the dipole moments of the ⟨S^z_{i1} S^z_{i1}⟩_c type, measured in EPR or NMR experiments as the Edwards-Anderson parameter, are not taken into account in this approach. We believe that such correlations are more important than the correlations between the neighbouring tetrahedra. Fluctuations of the deformational internal field, which can be estimated from the temperature dependence of the Edwards-Anderson parameter, are not taken into account in this approach either.

Hence, a theoretical description of the thermodynamic and dielectric properties of the hydrogen bonded compounds of the Rb1−x(NH4)xH2PO4 type, which can undergo a transition into the proton glass state, that would take into account the structural peculiarities and the different types of interactions is still a complicated and unsolved problem of statistical physics. This particularly concerns a microscopic description of the dynamical properties of these mixtures. The temperature curves of the real and imaginary parts of the longitudinal and transverse dielectric permittivities at different frequencies have to be described. Of particular interest is the possibility to explore the low-temperature curves of the imaginary parts of the dielectric permittivity at low frequencies. In [78-81] a theory of the static characteristics of model proton glasses with an arbitrary range of competing interactions has been proposed. In [82,83] a similar approach has been used for the description of some thermodynamic characteristics and the transverse dielectric permittivity of the hydrogen bonded Rb1−x(ND4)xD2PO4 and Rb1−x(NH4)xH2AsO4 compounds, in which an essential role in the formation of the energy levels is played by the proton short-range correlations. The goal of the present paper is to calculate the thermodynamic characteristics and the longitudinal and transverse dielectric permittivities of these compounds at different temperatures, concentrations, and frequencies, as well as to determine their phase diagrams.
Thermodynamic properties of the Rb1−x(NH4)xH2PO4 type compounds

It is well known that, for the description of the thermodynamic characteristics and dielectric properties (in a certain frequency range) of these crystals within the pseudospin-phonon model, the ionic variables can be excluded in the static approximation [84,85]. The system description is then performed within the framework of a pseudospin model with renormalized dipole moments d_{f,α} of the hydrogen bonds (α = + for RDP, α = − for ADP). Here we introduce an effective dipole moment of a tetrahedron, P_α; ⟨...⟩ is the conventional Gibbs thermodynamic average, and the summation f = A(B) is carried out over the bonds on which the protons order close to the given tetrahedron A(B). For RDP the tetrahedron polarization can take two opposite values along the c axis, corresponding to two protons ordered close to the upper edge of the tetrahedron (η_f = η) or close to the lower one (η_f = −η). For ADP (NH4H2PO4) the primitive cell is twice as large as for RDP, and in addition to the "A", "B" tetrahedra it contains "A′", "B′" tetrahedra. Since their polarizations are opposite to those of "A", "B", the total cell polarization is zero. For an ADP crystal, the change of sign of η_{f,−} at the transition to the "A′", "B′" tetrahedra can be taken into account through a phase factor exp(i k_z*·n), where n is the RDP primitive cell vector and k_z* is the vector at the Brillouin zone boundary directed along Z. Hence, in the cases of both ADP and RDP we use a primitive cell with "A" and "B" tetrahedra.

The Hamiltonian of a mixed Rb1−x(NH4)xH2PO4 system can be written in pseudospin form. Here S_nf = ±1 are spin operators describing the position of a proton on the f = 1, 2, 3, 4 hydrogen bond in the n-th cell at the R tetrahedron; E is an external uniform electric field; G_n is an internal random deformational field; J_{nf,n′f′} is the long-range interaction between protons; and H_A(n), H_B(n) are the configurational energies of the "A", "B" tetrahedra. In this work we take into account two configurational states of a tetrahedron (α = +, −). In the state +, the energy states of a tetrahedron are analogous to those in a pure RDP crystal with the ground state level ε_{s+}. In the state − (ADP) we use the same relations for V_α, U_α, Φ_α, but with different values of ε_α, w_α, w_{1α}. In the case of a mixed Rb1−x(NH4)xH2PO4 crystal, the ionic positions are occupied by Rb with the probability c+ = 1 − x and by NH4 with the probability c− = x. Hence, the distribution function of the strongly random energy parameter ε_α (and similarly for w_α, w_{1α}) can be qualitatively written as a two-component mixture weighted by c+ and c−. The state of the dipole moment on the bond, d_{f,αα_f}, is determined by the states α, α_f of the two tetrahedra connected by this bond. In the mean field approximation over the bonds, the configurationally averaged moment of a tetrahedron is obtained. In the present work we consider only two realizations of the sets of the configurationally averaged values η̄_f, which correspond to ferroelectric and antiferroelectric ordering. This permits us to use the primitive cell of RDP with 2 tetrahedra and 4 hydrogen bonds.
The mean free energy per primitive cell, F, can then be written as a sum of single-particle, cluster, and long-range contributions, where k* = 0 for the ferroelectric ordering and k* = k_z* for the antiferroelectric one. We introduce notations for the average values of the cluster fields φ̄_f and of the long-range fields φ̄_{L,f}. Averaging is performed over the random cluster fields with dispersion q and over the random deformational fields with dispersion G²_c for the transverse and longitudinal field components. The expressions for the single-particle function F^(0)_f and its derivatives F^(n)_f follow by differentiation of the single-particle partition function; for example, F^[21]_{ff′} = −2F^[1]_f F^[11]_{ff′}, F^[21]_{f′f} = −2F^[11]_{ff′} F^[1]_f, and analogously for F^[22]_{ff′}. Here the partition function L({ξ}||R_α) is calculated with the cluster Hamiltonian. We use the same model dependence (2.20) for the average eigenvalues of the long-range interaction matrix as for the dipole moment of a hydrogen bond: ν̄1 = J̄11 + 2J̄12 + J̄13, ν̄2 = ν̄4 = J̄11 − J̄13, ν̄3 = J̄11 − 2J̄12 + J̄13 (2.21). From the condition of the free energy extremum we find an expression for the average η̄_f = ⟨S_f⟩_c, the reduced Edwards-Anderson parameter Q_{EA,f}, and the equations for the unknown quantities. In the absence of an external field and for the ferroelectric ordering we obtain explicit expressions for the free energy, for the average η̄ = η̄_f, for the reduced Edwards-Anderson parameter Q_EA = Q_{EA,f}, and the equations for φ̄_L, φ̄, q. In the case of an antiferroelectric ordering in the absence of an external field, the free energy, the average η̄ = −η̄_1 = η̄_2, the reduced Edwards-Anderson parameter Q_EA = Q_{EA,f}, and the equations for φ̄_L, φ̄, q are obtained analogously. As numerical calculations of the free energy show, the antiferroelectric state is realized in the region close to the x = 1 − c → 1 limit; the ferroelectric state is realized in the region 1 − x = c → 1; and a proton glass state (φ̄ = φ̄_L = 0, q > 0) takes place at intermediate compositions.

The averaged matrices of the second derivatives F^[11]_c, F^[22]_c for the antiferroelectric phase are of the same symmetry as the matrix φ̄, and the eigenvalues of these matrices, F̄^[11]_μ, F̄^[22]_μ, are written as linear combinations similar to φ̄_μ. The symmetry of the matrix F^[21]_c is the same as that of q̄; after the unitary transformation it becomes analogous to the antidiagonal matrix q̄ with the corresponding elements F̄^[21]_μ. The matrix F^[12]_c is the transpose of F^[21]_c. After the unitary transformation and the exclusion of the parameters φ̄_μ, q̄_μ, we obtain expressions for the correlators η_μ entering the expression for the system susceptibility. In the case of the ferroelectric ordering the matrices η̄, φ̄, q̄, F^[nn′]_c have the same symmetry. As a result, we obtain the same expression for η_μ, except that for the eigenvalues F̄^[12]_μ we have to use a linear combination like that for φ̄_μ.

Optimal sets of model parameters

Using the expressions obtained in the previous sections, let us evaluate the dielectric and thermal characteristics of the Rb1−x(ND4)xD2PO4 and Rb1−x(NH4)xH2AsO4 compounds and compare them with the corresponding experimental data. The values of the theory parameters should provide the best possible fit to the experiment. The sets of model parameters found for the mixtures Rb1−x(ND4)xD2PO4 (T_c(x = 0) = 235 K, T_N(x = 1) = 242 K) and Rb1−x(NH4)xH2AsO4 (T_c(x = 0) = 110 K, T_N(x = 1) = 216 K) are presented in tables 1-4, respectively.
The dashes in the tables mean that the given tetrahedron is averaged over two states only (without the neutral state 0).

Spontaneous polarization

The calculated temperature curves of the spontaneous polarization for the Rb1−x(ND4)xD2PO4 and Rb1−x(NH4)xH2AsO4 compounds, along with the available experimental data, are shown in figure 2. The calculated dependences P_s(T) describe the experimental data at x = 0 well. With increasing x the theory predicts a decrease of the spontaneous polarization, until it completely vanishes at the concentration corresponding to the transition into the glass phase composition region. The temperatures at which the spontaneous polarization arises in the ferroelectric phase, or the spontaneous sublattice polarization arises in the antiferroelectric phase, at different x yield the T_c(x) or T_N(x) dependences, respectively. Let us note that at small x the saturation polarization is almost independent of x (curves 1 and 2 for all compounds), even though the order parameter η̄(x, T) at small T decreases with x. As seen from equation (2.9), the polarization is determined by the product d̄_z η̄. For all explored compounds the relation d_{z−} > d_{z+} is obeyed, and the average d̄_z increases with x, so that the low-temperature polarization of a tetrahedron is almost independent of x. With increasing x the parameter η̄ rapidly decreases at low T, which leads to a rapid decrease of the saturation polarization.

Molar specific heat

The experimental points for the proton contribution ΔC_p to the specific heat of the considered systems should be determined by subtracting the lattice contribution from the measured specific heat; the lattice contribution in the phase transition region is approximated by a linear dependence. The proposed theory, as seen in figures 3-4, properly describes the temperature dependence of the proton contribution to the molar specific heat of the Rb1−x(ND4)xD2PO4 and Rb1−x(NH4)xH2AsO4 compounds at x = 0 and x = 1. At compositions other than x = 0 or x = 1 the theory predicts a decrease of the jump of the specific heat at T_c and T_N and its vanishing at x in the proton glass composition region. To answer the question about the validity of the proposed theory for the Rb1−x(NH4)xH2PO4 type systems, further experimental investigations of the temperature dependences of the specific heat of these crystals in a wide composition range are required.

The reduced Edwards-Anderson parameter

The reduced Edwards-Anderson parameter Q_EA(T) of the Rb1−x(NH4)xH2PO4 type compounds is different from zero at all temperatures and concentrations x, except for x = 0 and x = 1 (figure 5). Let us note that the temperature and composition dependences of Q_EA(T) are similar for all compounds. The parameter Q_EA(T) has a rounded peak at the transition from the high-temperature paraelectric phase to the ferroelectric phase, whereas it rapidly falls to zero at the transition to the antiferroelectric phase. The parameter Q_EA(T) is the largest in the proton glass phase composition region and increases with decreasing temperature. For Rb1−x(ND4)xD2PO4 at x = 0.22 the theoretical curve 3 (figure 5(b)) satisfactorily describes the experimental data of [39]. At the same time, at x = 0.44 our calculations agree with the data of [68], but the obtained values are lower than those of [39] both for x = 0.44 and x = 0.22.
We believe that this can be explained by an incorrectly determined composition x of the samples in [39]. Unfortunately, no experimental data for Q_EA(T) were available for Rb1−x(NH4)xH2AsO4.

Longitudinal dielectric permittivity

The temperature dependence of the transverse permittivity of the Rb1−x(ND4)xD2PO4 and Rb1−x(NH4)xH2AsO4 compounds was explored in previous papers [82,83]. An essential difference between the static and dynamic permittivities arises only in the proton glass composition region and at temperatures below the maximum of ε33(T, ν). Here ε33(T, ν), even at small ν, always tends to ε0_33, whereas the theoretical static permittivity ε33(T, 0) at T → 0 tends to a certain finite value larger than ε0_33. At high temperatures the static and dynamic permittivities practically coincide; this permits us to speak about qualitative agreement or disagreement between the theoretical curves for ε33(T, 0) and the experimental points for ε33(T, ν ≠ 0). In figure 6 we show the temperature dependence of the longitudinal permittivity ε33(T, ν) of Rb1−x(ND4)xD2PO4; at x = 1.0 (curves 7, 7′) we are in the antiferroelectric ordering region. Let us note that for x = 1.0 the agreement with experiment for ε33(T) (curve 7′) would be slightly better if a different set of model parameter values was used [85]. In figure 7 the calculated longitudinal static permittivity ε33(T) for Rb1−x(NH4)xH2AsO4 is compared with the experimental data for ε33(T, ν ≠ 0) for different compositions x at low frequencies ν. In the ferroelectric phase composition region (x = 0; 0.08; 0.13) the static theory correctly describes the parts of the curves above T_c(x) as well as the position of the maximum of ε33(T, ν → 0), but the values in the vicinity of the peak are much larger than the experimental ones. This peak can be smeared out and lowered if macroscopic fluctuations of the concentration x as well as the piezoelectric effect are taken into account. In the proton glass composition region the theory and experiment coincide quantitatively at temperatures above the peak of ε33(T, ν). At low temperatures the experimental ε33(T, ν) rapidly decreases, because it is measured at non-zero frequencies. This decrease is qualitatively correctly described by the calculated real part of the dynamic permittivity ε′33(T, ν) in the glass phase composition region, as shown in figure 8 (experimental data at 52 GHz, curves 6 and 6′, and at 150 GHz, curves 7 and 7′, are from [10]). In the glass phase composition region the maximum of ε″33(T, ν) (which approximately coincides with the low-temperature inflection point of ε′33(T, ν)) corresponds to the temperature at which the relaxation time is close to the field period. For Rb1−x(ND4)xD2PO4 at x = 0.5 the calculated real and imaginary parts of ε33(T, ν) at different frequencies satisfactorily describe the experimental data. The theory yields a faster decrease of ε′33(T, ν) than the experiment, and a narrower and higher peak of ε″33(T, ν). We attribute this drawback to the imperfect procedure of configurational averaging of the susceptibility. In the case of Rb1−x(NH4)xH2AsO4 the calculated imaginary part of ε33(T, ν) has a very narrow and high peak. This discrepancy can possibly be caused by tunneling effects, which are essential in undeuterated compounds and are not taken into account in our calculations performed within the Glauber dynamics approach.
At high temperatures the frequency dependence of the complex permittivity ε(T, ν) is close to the Debye type (figure 9). At low temperatures the Debye-type behavior disappears, and in the imaginary part of the permittivity a clear two-peak structure of the dielectric spectrum is observed. In the antiferroelectric phase the low-frequency peak is less pronounced. We also calculated ε′aa(T, ν) and ε″aa(T, ν) (a = 1, 3) in the regions of ferroelectric and antiferroelectric ordering. At low frequencies and at temperatures near and above T_c(x), ε′aa(T, ν) practically coincides with the static permittivity εaa(T). At low temperatures ε″33(T, ν) has a peak (correspondingly, ε′33(T, ν) has a bend); see figure 10 for x = 0.2. With lowering x the temperature position of this peak in ε″aa(T, ν) practically does not change, but its height rapidly decreases. We failed to find this peak numerically at x < 0.15. A similar peak is detected in the antiferroelectric phase region at 0.65 < x < 0.70. Let us note that for the same frequency ε11(T, ν) < ε33(T, ν) for all concentrations x, which agrees with the experimental data. It should be noted that both for the transverse and longitudinal permittivities the best description of the experimental data is obtained in the regions of the so-called "pure" phases, that is, x → 0, x → 1, and the glass phase region at x ~ 0.5 for Rb1−x(ND4)xD2PO4 and x ~ 0.35 for Rb1−x(NH4)xH2AsO4. The approach proposed here can be used to describe the dynamic characteristics of the Rb1−x(ND4)xD2PO4 compounds and to evaluate the qualitative behavior of the permittivities of Rb1−x(NH4)xH2AsO4.

Phase diagrams

The phase diagrams of the Rb1−x(NH4)xH2PO4 system are constructed using the calculated physical characteristics of the crystals. The following regions are present in these diagrams: HP (high-temperature region of the paraelectric phase), LP (low-temperature region of the paraelectric phase), F (ferroelectric phase), and AF (antiferroelectric phase) (figures 11, 12). Experimental points are taken from [12], [13], and [49]. The solid lines are the T_c, T_N, and T_g transitions obtained from the maxima of ε33(T) and ε11(T). The dashed lines are the T_g lines obtained from the maxima of ε″33(T, ν) and ε″11(T, ν) at a frequency of 1 MHz. Typical peculiarities of the phase diagrams of the considered compounds will be discussed by the example of Rb1−x(ND4)xD2PO4 (figure 11). At high temperatures the system is in the paraelectric phase. This region is designated HP because here the reduced Edwards-Anderson parameter Q_EA is small but different from zero and decreases with increasing temperature. For x < 0.2 and x > 0.65 a spontaneous polarization or a sublattice spontaneous polarization arises at T < T_c(x) and T < T_N(x), respectively. As a result, the system goes into the ferroelectric or antiferroelectric state. Here the reduced Edwards-Anderson parameter Q_EA can be significant (figure 5) in the vicinities of T_c(x), T_N(x) and in the ferroelectric phase for x close to the glass phase composition region. The central composition region we designate as the low-temperature region of the paraelectric phase. This region lies below the maxima of the static permittivities ε11(T) and ε33(T) (the solid lines in figures 11, 12) and exhibits large values of Q_EA.
The dashed lines (T_g,11(x, ν) and T_g,33(x, ν)) correspond to the low-temperature peaks of ε″11(T, ν) and ε″33(T, ν) at ν = 1 MHz for Rb1−x(ND4)xD2PO4 (the so-called freezing lines). These lines continue into the regions x < 0.2 and x > 0.65, where the paraelectric (or the proton glass) phase possibly coexists with the ferroelectric or antiferroelectric phases, respectively. Numerical calculations show that T_g,11(x, ν) → 0 and T_g,33(x, ν) → 0 at ν → 0, so within the framework of our theory the averaged relaxation times for the longitudinal and transverse permittivities have an Arrhenius-like temperature behavior, that is, T0 = 0 (the Vogel-Fulcher temperature vanishes). It should be noted that the approximation of the averaged relaxation times based on the experimental data [10] gives the value T0 ≈ 32 K for x = 0.5. The experimental points of [49] presented in this phase diagram were obtained by NMR studies. The phase diagram of Rb1−x(NH4)xH2AsO4 is strongly asymmetric (figure 12), and the proton glass composition region exists at x = (0.2; 0.45). The freezing lines T_g,11(x, ν) and T_g,33(x, ν) (dashed lines) correspond to the maxima of ε″11(T, ν = 30 kHz) and ε″33(T, ν = 30 kHz). The approximation on the basis of the experimental data [17] gives the value T0 ≈ 30 K for x = 0.36. According to the experimental data [18,94], T_g,11(x, ν) is observed in the ferroelectric phase down to x = 0.01, and T_g,11(x, ν) → 0 with decreasing x. Our calculations yield the freezing line down to x ~ 0.15. Overall, the calculated phase diagram correctly describes the available experimental lines, even though some discrepancies are present. Thus, at the accepted values of the theory parameters the glass phase composition region, x ~ [0.18; 0.46], is somewhat wider than the experimental one, x ~ [0.22; 0.42]. This difference can be related to an incorrectly determined concentration x in the experimental samples.

Conclusions

In the framework of the four-particle cluster approximation for the short-range interactions and the mean field approximation for the long-range interactions, we calculated the free energy, the system of equations for the variational parameters, and expressions for the spontaneous polarization, the Edwards-Anderson parameter, the molar specific heat, and the longitudinal and transverse dielectric permittivities of the Rb1−x(ND4)xD2PO4 and Rb1−x(NH4)xH2AsO4 compounds for all compositions x. The theoretical results are compared with experimental data. In the ferroelectric phase composition region, the spontaneous polarization decreases with increasing x and vanishes at the transition to the glass phase region. The molar specific heat of the Rb1−x(NH4)xH2PO4 type compounds in the regions of the ferroelectric and antiferroelectric phases has jumps, which vanish at the transition to the proton glass phase composition region. The Edwards-Anderson parameter is different from zero at all compositions 0 < x < 1 and all temperatures, which is explained by the internal random deformational fields. For the Rb1−x(ND4)xD2PO4 mixture the proposed theory satisfactorily describes the temperature curves of the real and imaginary parts of the longitudinal and transverse permittivities in the regions of the "pure" phases (x ~ 1, 0.5, 0). At the same time, for Rb1−x(NH4)xH2AsO4 at low temperatures in the glass phase composition region the theory incorrectly describes the shape of the imaginary part of the permittivity curves ε″aa(T, ν) (the theoretical peak is too narrow and too high).
This is partially caused by the tunneling of protons, which is neglected within the Glauber approach and plays an essential role in the dynamic processes in these systems at low temperatures. It is established that in this model the dynamics in the proton glass composition region is of the Debye relaxation type only at high temperatures. In our model the temperature curves of the averaged relaxation times for the longitudinal and transverse permittivities in the proton glass composition region at T → 0 are close to the Arrhenius law. The phase diagrams constructed using the calculated dielectric characteristics are close to the experimental ones. The absence of reliable experimental data for the physical characteristics of the Rb1−x(NH4)xH2PO4 type proton glasses in a wide composition range poses huge difficulties in verifying the validity of the proposed theory. Possible further improvements of the theory of proton glasses also require reliable experimental data for the temperature dependences of all the calculated characteristics of these crystals in a wide composition range.
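As a closing numerical illustration of the Arrhenius/Vogel-Fulcher distinction drawn above (a sketch with an assumed prefactor and activation energy, not fitted values; the T0 = 32 K case merely echoes the estimate based on [10]): the frequency-dependent freezing temperature defined by 2πν·τ(T_g) = 1 vanishes logarithmically as ν → 0 for an Arrhenius law (T0 = 0), whereas it saturates at T0 for a Vogel-Fulcher law.

```python
# Hedged sketch: freezing temperature T_g(nu) defined by 2*pi*nu*tau(T_g) = 1
# for tau(T) = tau0 * exp(E/(T - T0)); T0 = 0 reproduces the Arrhenius case.
import numpy as np

def T_g(nu_hz, tau0=1e-13, E_k=400.0, T0_k=0.0):
    # 2*pi*nu*tau0*exp(E/(T - T0)) = 1  =>  T = T0 + E / ln(1/(2*pi*nu*tau0))
    return T0_k + E_k / np.log(1.0 / (2.0 * np.pi * nu_hz * tau0))

for nu in (1e6, 1e3, 1e0, 1e-3):
    print(f"nu = {nu:8.0e} Hz   Arrhenius T_g = {T_g(nu):5.2f} K   "
          f"Vogel-Fulcher T_g = {T_g(nu, T0_k=32.0):5.2f} K")
```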
12,649
2010-01-01T00:00:00.000
[ "Physics", "Materials Science" ]
SMORE: Synteny Modulator of Repetitive Elements Several families of multicopy genes, such as transfer ribonucleic acids (tRNAs) and ribosomal RNAs (rRNAs), are subject to concerted evolution, an effect that keeps sequences of paralogous genes effectively identical. Under these circumstances, it is impossible to distinguish orthologs from paralogs on the basis of sequence similarity alone. Synteny, the preservation of relative genomic locations, however, also remains informative for the disambiguation of evolutionary relationships in this situation. In this contribution, we describe an automatic pipeline for the evolutionary analysis of such cases that uses genome-wide alignments as a starting point to assign orthology relationships determined by synteny. The evolution of tRNAs in primates as well as the history of the Y RNA family in vertebrates and nematodes are used to showcase the method. The pipeline is freely available.

Introduction

A precise record of the history of a gene family, that is, an accurate reconstruction of a phylogenetic gene tree, is an indispensable prerequisite for a detailed description of the functional evolution of its members and the assessment of innovations [1,2]. The exact placement of gene duplication and gene-loss events relative to a species tree is also of key importance in the context of forward genomics [3]. The first crucial step towards elucidating the history of a gene family is to distinguish orthologs, that is, gene pairs that originated from a speciation event, from paralogs, which arose by gene duplication [4]. A large arsenal of computational methods has become available to determine orthology. These tools either compute a gene phylogeny from aligned sequences and subsequently reconcile the gene tree with a species tree, or they use a "reciprocal best match" rule [5,6]. We refer to [7-11] for reviews of the topic and benchmarks of the most commonly used tools. Both approaches assume that genes evolve essentially independently, so that sequence divergence is a faithful measure of evolutionary distance. Multicopy genes sometimes violate this assumption in a very strong way. Concerted evolution [12,13] may cause paralogous genes to maintain essentially identical sequences over long evolutionary time scales. The underlying mechanism is primarily homologous recombination, which leads to gene conversion, in which a piece of the sequence from one copy of the gene effectively overwrites a homologous region in another copy. Unequal crossover between repeating units and gene amplification are also important contributors (e.g., [14]). Gene conversion is responsible for preventing the divergence of the individual copies of transfer ribonucleic acids (tRNAs) [15], small nuclear RNAs (snRNAs) [14], the ribosomal RNA (rRNA) cistron [16], and the histone genes [17]. Paralogous genes can escape from concerted evolution [18] and then rapidly accumulate mutations, typically leading to a loss of function and hence eradication from the genomic record. Together, these processes can result in a rapid net turnover of gene copies and sometimes large differences in the number of copies in closely related genomes. This effect has been studied in much detail, in particular for the case of tRNAs [19-23]. Because paralogous sequences are essentially identical, it is not possible to identify orthologs of genetic elements that are subject to concerted evolution by means of sequence comparison.
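For contrast with the synteny-based approach developed below, the following toy Python sketch (function and gene names are invented for illustration) shows the "reciprocal best match" rule in its simplest form; it is precisely this similarity-based criterion that becomes uninformative when concerted evolution keeps all paralogs effectively identical, since every pairing then scores the same:

```python
# Toy sketch of the "reciprocal best match" rule: two genes are called
# orthologs if each is the other's best-scoring hit between two species.
def reciprocal_best_matches(scores_ab, scores_ba):
    """scores_ab[a][b]: similarity of gene a (species A) vs gene b (species B)."""
    best_ab = {a: max(hits, key=hits.get) for a, hits in scores_ab.items()}
    best_ba = {b: max(hits, key=hits.get) for b, hits in scores_ba.items()}
    return {(a, b) for a, b in best_ab.items() if best_ba.get(b) == a}

pairs = reciprocal_best_matches(
    {"gA1": {"gB1": 0.9, "gB2": 0.4}, "gA2": {"gB1": 0.3, "gB2": 0.8}},
    {"gB1": {"gA1": 0.9, "gA2": 0.3}, "gB2": {"gA1": 0.4, "gA2": 0.8}},
)
print(pairs)  # {('gA1', 'gB1'), ('gA2', 'gB2')} -- identical paralogs would tie
```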
Synteny, however, provides a potentially powerful means of discriminating between orthologous loci. Reliable synteny information can be obtained whenever there are unique sequence regions in close genomic proximity to the locus of interest; for such regions, orthology can be established with high confidence among related species. The conservation of proximity to such independently evolving regions can then be used to distinguish orthologous from paralogous copies of the ambiguous sequence element. This idea has been exploited in the past, in particular as a means of tracing the evolution of tRNAs [19-22]. In [23], we explored its implications in some detail and proposed a more systematic conceptual workflow for the evolutionary analysis of multicopy genes that can use genome-wide multiple sequence alignments (MSAs), many of which are already publicly available, as a source of synteny information. In the present contribution, we describe an implementation of a fully automatic computational pipeline that serves as a convenient tool for this purpose, and we describe applications to two classes of non-coding RNAs (ncRNAs).

tRNAs originated before the separation of the three domains of life. There is clear evidence, furthermore, that all tRNA genes are homologs, derived from an ancestral "proto-tRNA" [24], which in turn may have emerged from even smaller components [25]. tRNAs are indispensable in all organisms. In addition to their ancestral role as mediators of the genetic code (e.g., [26]), tRNAs have secondarily acquired additional functions, reviewed, for example, in [27,28]. Beyond bona fide tRNAs, there is a rich universe of tRNA-derived repetitive short interspersed nuclear elements (SINEs) [29] and small RNAs that either derive directly from tRNAs [30,31] or arose indirectly as exapted SINEs [32]. Multiple identical copies, often large numbers of pseudogenes, and rapid, lineage-specific expansions of particular families are typical for tRNA evolution, at least in Eukarya [19,33]. Among the elements under concerted evolution, tRNA genes are the most widely studied. They show a rapid turnover as the consequence of frequent seeding of new loci, compensated for by high rates of pseudogenization [19-22]. While gain and loss events can be estimated from changes in the total number of paralogs with often acceptable precision for low-copy-number gene families such as microRNAs [34], this is not the case for tRNAs, as the number of conserved tRNA loci decreases very quickly with phylogenetic distance [19,23].

The second example is the mammalian Y RNAs. Like tRNAs, Y RNAs are pol III transcripts [35]. They form the RNA component of Ro ribonucleoprotein (RoRNP) particles [36,37]. The molecules exhibit a characteristic secondary structure that has been studied extensively in the past [38,39]. They are essential for the initiation of chromosomal deoxyribonucleic acid (DNA) replication in vertebrates [40], likely in conjunction with the origin recognition complex [41]. As part of the RoRNP, they are involved in RNA stability and cellular responses to stress [42]. In addition, small Y RNA-derived fragments are enriched in apoptotic cells [43]. The evolution of Y RNAs has been studied in some detail in [44], indicating a single, evolutionarily conserved genomic cluster comprising four paralog groups designated Y1, Y3, Y4, and Y5.
With the notable exception of mammals, which harbor on the order of 1000 Y RNA-derived retro-pseudogene sequences [45], most other vertebrates show only a few Y RNA-derived pseudogenes.

Overview

The pipeline is composed of two modular parts: (i) the inference of the orthology relation, and (ii) the quantitative analysis of the orthology relation (see Figure 1). The first component identifies a map of genomic anchor points that are used to partition the annotated elements of interest into an initial set of candidate clusters. These are then processed to account for the most common artefacts in the input data and are refined using information that is provided by analyzing related but distinguishable sequence elements together. The second part of the pipeline is largely independent of the first and can also be employed using input data generated by other, third-party methods. With our pipeline, we provide an uninterrupted workflow that returns results based on input files and user-defined parameters. With the exception of the breaks between subcommands indicated in Figure 1 and the points where output data is provided for the user, UNIX pipes are used to transfer data between software components.

Figure 1. Summary of the computational workflow implemented in the Synteny Modulator Of Repetitive Elements (SMORE) pipeline for analyzing the evolution of multicopy genes. The compilation of orthology estimates and the quantitative analysis are logically separated and can also be used independently of each other; see text for details. The blue box describes options for input data. Black arrows pointing toward the next step of the pipeline (to the right) indicate an uninterrupted workflow, with no printing or reading of files between single steps of the pipeline. Black arrows pointing downward indicate output files that are always part of the output, whereas blue arrows pointing downward indicate the creation of temporary files and of optional output for the user.

Annotation of the Loci of Interest

In this contribution, we discuss two showcase examples. In each case, the first step is the identification of the loci of interest, for which different tools and initial data have been used. We employed tRNAscan-SE [46] to annotate nuclear tRNA genes in up to 10 mammalian genomes. We identified Y RNA genes starting from the Y RNA sequences reported in [44] for mammals and a sequence alignment in [47] for nematode genomes. For the mammalian sequences, we first constructed an MSA together with a consensus secondary structure using mlocarna [48-50]. We then used Infernal [51] to generate and calibrate covariance models on the basis of both multiple alignments. In the final step, Infernal was used to identify significant matches in the genome. The alignment of Y RNA sequences and information on the investigated genomes can be found in the Supplementary Materials S1 and S2, respectively.

Genomic Anchors

A key step in our workflow is the identification of genomic anchors. Following [23], we define a genomic anchor as a sequence interval for which orthology between pairs of genomes can be established without ambiguity. As it is key to our approach, we briefly review the concept here in more formal terms. Given a genetic element g_A of interest in species A, we make two assumptions:

1. For the genetic element g_A, we can find two flanking regions p_A and q_A that have orthologous counterparts p_B and q_B in species B on the basis of sequence similarity.
2. On the basis of genomic coordinates, the order of the sequences is such that p_A < g_A < q_A and p_B < q_B.

As orthologous counterparts of genomic anchors might not be present in all species of interest, we also define tight anchors: the closest possible anchors for a given element g_A in species A, regardless of whether orthologous anchors exist in every other species. This ensures that the definition of orthologous genomic regions is as highly resolved as possible in the first step. The nature of a genomic anchor is irrelevant; it can be any uniquely aligned sequence block.

Our starting point for the computation of genomic anchors is an MSA. We emphasize that MSAs in general do not correctly align multicopy genes, as well-conserved multicopy elements are often used for the generation of anchors for the MSA itself. This creates artefacts, because the initial alignment step by construction cannot distinguish between the individual copies of a family of loci that is subject to concerted evolution. We refer to [23,52] for a more extensive discussion of this issue. For mammals, we used the MultiZ alignment of 19 mammalian genomes with human [53], and for nematodes, the MultiZ alignment of 25 nematode genomes with Caenorhabditis elegans [54], both downloadable through the University of California Santa Cruz (UCSC) Genome Browser. As a result of duplicated genome regions and the presence of other multicopy elements, not all alignment blocks reported in the initial MultiZ alignments can meaningfully serve as genomic anchors. We therefore eliminated all genomic anchors, also called multiple alignment format (MAF) blocks, that overlapped with any element of interest or with other MAF blocks of the MSAs. In the final step, the MAF blocks immediately upstream and downstream of each annotated occurrence of an element of interest are compiled. Together, they form the anchor map for the family of genetic elements in question.

Candidate Clusters of Co-Orthologous Genes

The anchor map partitions the set of genetic elements into groups of potential co-orthologs. More precisely, we make the simplifying assumption that no genomic rearrangement has occurred between the tight anchors enclosing an element. An initial set of clusters is obtained by combining only sets of elements that share the same pair of anchors. As shown in Figure 2, this may lead to (i) clusters that contain multiple elements from the same species, and (ii) the separation of elements into different clusters because of a lack of common anchors. The first case likely identifies in-paralogs, that is, recent duplications in one species. The second case may arise from deletions of the anchor elements in some species. More likely, however, it is associated with missing data or assembly artefacts. In the initial partition, this often produces a large number of singletons, which would lead to a substantial underprediction of orthology. To account for these issues, we post-process the initial clusters. In order to deal with missing anchors, we join clusters C and C′ that are located within a user-defined maximum distance from an anchor, if they satisfy the following conditions:

1. The relative genomic order of the elements in each cluster is the same.
2. There are no elements belonging to another cluster between the elements of C and C′.
3. The total extension of the merged cluster C ∪ C′ does not exceed a user-defined threshold.
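As an informal illustration of the clustering logic just described, the sketch below groups elements by their flanking anchor pair and then tests two clusters against simplified versions of the joining conditions. This is not SMORE's actual code; the element records, anchor identifiers, and the per-species order check are assumptions made for the example.

```python
# Hypothetical data model: each element is a dict with 'species', 'pos',
# 'left_anchor', 'right_anchor'; anchor IDs are orthologous across species.
from collections import defaultdict

def initial_clusters(elements):
    """Initial partition: elements sharing the same pair of tight anchors."""
    clusters = defaultdict(list)
    for e in elements:
        clusters[(e["left_anchor"], e["right_anchor"])].append(e)
    return list(clusters.values())

def can_join(c1, c2, max_span):
    """Simplified joining test: for every species present in both clusters,
    all elements of c1 must precede all elements of c2 (condition 1), and
    the merged cluster must stay within a user-defined extension
    (condition 3, checked per shared species here for simplicity).
    Condition 2 (no foreign elements in between) would require the full
    genome context and is omitted from this sketch."""
    for sp in {e["species"] for e in c1} & {e["species"] for e in c2}:
        p1 = [e["pos"] for e in c1 if e["species"] == sp]
        p2 = [e["pos"] for e in c2 if e["species"] == sp]
        if max(p1) >= min(p2):          # inconsistent relative order
            return False
        if max(p2) - min(p1) > max_span:
            return False
    return True
```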
Counting Events Using Relaxed Adjacency Conditions

A less strict way of joining clusters is to evaluate the adjacency conditions of the genomic anchors only for the species that are involved in the clusters to be joined. In other words, we require that the clusters are joinable in all species that have an element in any of the considered clusters, keep the syntenic orthology relation for the clusters, and ignore species that do not appear in the relation. This leads to small changes in the estimated numbers of events, primarily as a result of the reduction of the number of singleton loci. On average, therefore, the numbers reported for duplications and insertions increase.

Orthologs

The resulting partition may still contain non-orthologous elements. In the case of tRNAs, for instance, the annotation generated by tRNAscan-SE only distinguishes anticodon classes, which may still comprise multiple, discernible families. We therefore construct, for each cluster, a graph G = (V, E) whose vertices are the annotated elements that belong to the cluster. An edge is drawn between two elements v and w if their sequences are more similar than a certain threshold. In the case of tRNAs, values of 80% to 90% sequence identity have proved useful [23]. This value needs to be set depending on the typical sequence conservation of the elements under consideration and on the phylogenetic range of interest. The graph G represents the orthology relation within a given cluster (see Figure 3A for an example). As shown in [55], the graph G should be a co-graph; that is, it must not include a path P_4 on four vertices as an induced subgraph. If G is constructed from the sequence data using fixed thresholds for sequence similarity, it will sometimes violate the co-graph property. Nevertheless, it provides a good approximation. The initial graph G can be corrected by inserting or deleting the minimal number of edges required to restore the co-graph property. Although co-graph editing is known to be a difficult problem (the corresponding decision problem is non-deterministic polynomial-time hard (NP-hard) [56]), it remains tractable for the sizes of candidate graphs that we typically encounter.

Figure 3. Example of the graph G for a cluster consisting of two groups of orthologous elements in two species S and T (A). Thick edges indicate above-threshold sequence similarity. The dashed edge, which was not included initially, must be inserted to correct G; otherwise T5-S4-T3-S1 would form an induced P_4. Modified Needleman-Wunsch alignment for graph G (B). The edge inserted to restore the co-graph property is now part of the thick edges showing the orthology relation. The alignment removes crossing edges of the orthology graph and detects duplications (dashed edges). The edge attached to node T1 indicates a deletion in species S, as there is no target node for this edge.

The possibly edited graph G′ may still overpredict orthology in cases where a cluster contains multiple types of elements that are distinguished by similarity. In such cases, the order relative to dissimilar elements may subdivide the ortholog clusters of G′. To utilize this order information, we consider an alignment of the elements that (i) preserves their genomic order, and (ii) allows matches only between elements that are connected by edges in G′.
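To make the co-graph condition tangible, here is a brute-force induced-P_4 check. The adjacency encoding is a simplification, and the example graph is one consistent reading of Figure 3A (which exact edge the figure shows as dashed is an assumption of this sketch).

```python
from itertools import combinations, permutations

def has_induced_p4(adj):
    """Return True if the graph contains an induced path on four vertices
    (P4), i.e. violates the co-graph property.  `adj` maps each vertex to
    the set of its neighbours.  Brute force is fine for the small
    per-cluster graphs encountered here."""
    for quad in combinations(adj, 4):
        for a, b, c, d in permutations(quad):
            # chain edges a-b, b-c, c-d present; all other pairs absent
            if (b in adj[a] and c in adj[b] and d in adj[c]
                    and c not in adj[a] and d not in adj[a]
                    and d not in adj[b]):
                return True
    return False

# Without the dashed edge S1-T5, the path T5-S4-T3-S1 is an induced P4;
# inserting the edge restores the co-graph property.
g = {"S1": {"T3"}, "S4": {"T3", "T5"}, "T3": {"S1", "S4"}, "T5": {"S4"}}
assert has_induced_p4(g)
g["S1"].add("T5"); g["T5"].add("S1")   # insert the correcting edge
assert not has_induced_p4(g)
```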
This order-preserving alignment problem is solved by a modification of the well-known Needleman-Wunsch alignment algorithm [57] that also allows duplications of elements (see Figure 3B for an example). As explained in Figure 3, the modified Needleman-Wunsch algorithm removes crossing edges and allows duplications. The exclusion of crossing edges is an intrinsic property of alignments and is the reason for choosing this type of approach here. More precisely, when presented with two linearly ordered sets of objects and a weighted bipartite graph of allowed matches between objects from different sets, alignment algorithms compute maximum-weight matchings that preserve the prescribed order in both sets. The modified version of the Needleman-Wunsch algorithm employed here extends the match case in such a way that an element in one set may also be matched with one or more consecutive objects in the other set. We refer to [23] for the details of the dynamic programming solution to this problem.

Quantitative Analysis of Evolutionary Events

Taken together, the construction of the orthology relation outlined above provides, for each final orthology graph, information on (i) the first appearance of the ortholog group, (ii) duplication events, and thus (iii) the losses. This follows from the theory developed in [55,58], which establishes the correspondence between orthology relations and event-labeled gene trees. Usually, one is primarily interested in placing duplication and loss events relative to a known species phylogeny. Although it is not always possible to reconcile event-labeled gene trees with species trees [59], we found that our data were almost always "clean" enough to cause few problems in this respect, because the final ortholog groups contained only very small numbers of locally occurring paralogs. We could therefore use a simple heuristic that corrects the graph structure by deleting or adding edges in such a way that the result can be reconciled with a phylogeny; the heuristic iteratively deletes or adds edges while keeping the number of edited edges minimal. Given a species tree S and a cluster C of orthologous genes, we let σ(x) ∈ S be the species in which element x ∈ C resides. Thus σ(C) is the set of species in which members of the cluster are attested. The appearance or insertion of C into S is placed on the edge ancestral to the least common ancestor of σ(C) in S. As a consequence, every cluster that is present ancestrally is viewed as an "insertion before the root". Using the same parsimony assumption, we place deletions of C on the edges ancestral to the maximal subtrees of S below the insertion point that contain no species from σ(C). If the species tree is fully resolved, then deletions are never inferred at an edge leading to a child of the least common ancestor itself. If a cluster contains multiple paralogs, duplication events are associated with changes in the copy number. Because clusters are by construction local in the genome, such duplication events correspond to tandem duplications. In contrast, the proliferation of elements by insertion at different loci is accounted for by the insertion events. A detailed mapping of tandem duplications to the species tree is non-trivial, as the event-labeled gene trees obtained from co-graphs are usually not fully resolved. The pipeline therefore counts only the duplication events that occurred along the lineage leading from the root to a given leaf.
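The parsimony placement of insertions and deletions described above can be sketched as follows; the nested-tuple tree encoding is hypothetical and not SMORE's internal representation.

```python
def leaves(t):
    """Leaf set of a nested-tuple tree; leaves are species names."""
    return {t} if isinstance(t, str) else set().union(*(leaves(c) for c in t))

def place_events(tree, present):
    """Insertion on the edge ancestral to the LCA of the species set
    `present`; deletions on edges to maximal subtrees that contain no
    species from `present`."""
    def lca(t):
        if not present <= leaves(t):
            return None
        if isinstance(t, str):
            return t
        for child in t:
            sub = lca(child)
            if sub is not None:
                return sub
        return t

    insertion_node = lca(tree)
    deletions = []

    def collect(t):
        if not (leaves(t) & present):
            deletions.append(t)       # maximal "empty" subtree: one deletion
        elif not isinstance(t, str):
            for child in t:
                collect(child)

    collect(insertion_node)
    return insertion_node, deletions

# Example: a cluster attested in human and macaque only.
tree = ((("human", "chimp"), "macaque"), ("mouse", "rat"))
node, dels = place_events(tree, {"human", "macaque"})
# node == (("human", "chimp"), "macaque"); dels == ["chimp"]
```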
The lineage-wise duplication counts can be extracted directly from the pairwise alignments of the element orders within each cluster. An example is shown in Figure 4.

Pseudogenes and Remolding Events

An important pathway to gene loss is pseudogenization, which can in many cases be detected by means of sequence similarity: pseudogenes are identified on the basis of their sequence similarity to the target elements. If Infernal is used to retrieve a set of target elements, the user can specify a threshold for the Infernal score below which an element is marked as a pseudogene. If target elements are given as a table created by the user, the table will include a column specifying whether the element at a given locus is considered a pseudogene or not. In the case of tRNA detection, tRNAscan-SE is used to retrieve a set of target tRNAs.

Remolding refers to an evolutionary event that changes the type or subtype of a molecule. The best-known examples are changes of the anticodons in tRNAs such that the tRNA then refers to a different amino acid [60,61]. Remolding events are determined on the basis of the similarity thresholds for detecting orthologous elements and the annotated element types. Hence, given two tRNAs with distinct types but a similarity above the specified threshold, the pair of tRNAs is reported as a remolding event. Conversely, if two elements have the same type but their sequence similarity is below the given threshold, this is also reported. The types of elements can, at least in part, be retrieved from the Infernal or tRNAscan-SE output, which the user can employ to generate a customized list of target elements. In the case that no type is given, remolding events cannot be reported. By definition, no remolding events can be associated with singleton clusters.

Implementation

Both parts of the pipeline run fully automatized according to the given input and parameters. In addition, the second part is available in two different versions: a fast version with as few output files as possible, and a slower, verbose version that prints intermediary files so that the user can take a deeper and more detailed look into the data. This includes the formation of clusters and the graphs created thereof, as well as the derived duplication alignments used for counting phylogenetic events. The current version of the pipeline requires the following input data:

1. An MSA of the genomes under consideration is required to extract the synteny anchor points. Currently, only MultiZ format is supported.
2. The corresponding genomic sequences are required for the annotation of the loci of interest. The pipeline expects fasta format. Because there is no guarantee that a genome-wide MSA represents the complete genome, both the MSA and the genomes must be provided.
3. Target elements can be specified either as user-supplied annotation files or as one or more covariance models for annotation with Infernal or tRNAscan-SE. The modular organization of the pipeline makes it straightforward to add, in future releases, further means of generating annotation information, such as hidden Markov models of proteins.
4. A phylogenetic tree of the species of interest is necessary as a background to which evolutionary events are mapped.

The first three data items are required for the construction of the orthology relation. The phylogenetic tree is required only for the second part of the pipeline. There are several parameters that can be adjusted by the user. The most important is the similarity threshold for true orthology candidates.
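A minimal sketch of how such a similarity threshold and the remolding rules above could be applied to a pair of annotated elements (the data model and the naive identity measure are assumptions of this example; a real pipeline would score proper pairwise alignments):

```python
def percent_identity(a, b):
    """Naive position-wise identity; stands in for an alignment-based score."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / max(len(a), len(b))

def classify_pair(e1, e2, threshold=80.0):
    """Apply the rules from the text: similarity above the threshold yields
    an orthology edge, or a remolding event if the annotated types differ;
    same type below the threshold is also reported."""
    sim = percent_identity(e1["seq"], e2["seq"])
    if sim >= threshold:
        return "remolding" if e1["type"] != e2["type"] else "ortholog-edge"
    return "diverged-same-type" if e1["type"] == e2["type"] else None

# Example: two tRNAs with distinct anticodon types but near-identical
# sequences would be reported as a remolding event.
```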
For the showcase examples reported here, we used the same threshold value of 80%. The threshold below which low-scoring MAF blocks are discarded from the analysis can also be set by the user. In addition, the pipeline offers several command-line parameters to run only subsections of the workflow and to omit some of the intermediate processing steps. For details, we refer to the user manual.

The pipeline produces both machine-readable text files containing details of the analysis and condensed representations. The pipeline can also store detailed information on intermediate results, which may be useful in particular as a starting point for exploring alternative analysis strategies. The final results include (i) the main results file, a phylogenetic tree displaying the evolutionary events in newick format, as well as auxiliary files for the visualization of the tree and event information using iTOL [62], an interactive online visualization tool; (ii) a file listing all gene clusters retrieved from the input data; (iii) a list of all genetic events sorted by event and species; (iv) a list containing the numbers of genetic elements sorted by species and type; and (v) a list containing remolding events. Optional intermediate files include (i) the edge-weighted graph of each initial cluster; (ii) a file for each of the clusters specifying which elements are contained in the cluster, including all available annotation information for each element; (iii) the element-wise alignments of each cluster; and (iv) information on the co-graph structure or deviations thereof.

Benchmarking with Artificial Data

In order to test the functionality and performance of the pipeline, we constructed artificial data sets comprising six species with artificial "genomes" that were initially linked by 10,000 genetic anchors; 100 simulated "genetic elements", subdivided into three distinct types, were randomly placed between the anchors. We considered both a random placement of the elements and the insertion of elements into homologous positions of all or of a subset of the species. Furthermore, in order to model tandem duplication, a fraction of elements were added twice. In order to simulate noise in the genome-wide alignments, a fraction of the anchor blocks were deleted randomly. We considered perfect data as well as a loss of 20% and 40% of the anchor blocks. For each setting, we executed our pipeline and compared the reconstructed orthology assignments and gain/loss statistics to the known ground truth.

Automatic Pipeline for Multicopy Elements

We have developed a fully automatized pipeline that implements an improved version of the conceptual workflow of [23] for the detailed quantitative analysis of genetic elements that are subject to concerted evolution. It uses the synteny information provided by uniquely aligned sequences adjacent to the multicopy elements of interest as the key to disentangling their evolutionary relationships. The mathematical properties of orthology relations and their equivalence to event-labeled gene trees guide the post-processing of the data. This makes it possible to obtain an accurate and well-resolved picture of the history of multicopy families. In the work of [23], the workflow was not implemented in a coherent piece of software but was left at a conceptual level, requiring each analysis step to be performed in isolation.
Here, we describe a fully automatized and publicly available pipeline that not only greatly facilitates the analysis in practice, but also ensures a high degree of reproducibility. For convenience, the pipeline also includes options to automatically generate input annotation data using tRNAscan-SE and Infernal. By including checks for missing data and distinct levels of adjacency constraints, we furthermore improved the accuracy of counting genetic events along the phylogenetic tree. Finally, the output of our pipeline includes files to easily visualize the resulting phylogenetic tree using iTOL, thus facilitating the interpretation of the results.

The pipeline, which is written in Python and Perl, is available from https://github.com/AnneHoffmann/Smore. It requires Infernal and tRNAscan-SE if the user decides to use these tools for the genome annotation step. A user manual provides detailed usage instructions. We additionally include a small example in the repository with instructions on how to apply the pipeline to data. Input data and output files for all subcommands applied to the small test set are available. The repository also provides the covariance models and the gene lists used in this contribution.

As showcase examples, we investigated the evolution of several multicopy ncRNA families. First, we reanalyzed the evolution of tRNAs in two different mammalian data sets comprising 6 and 10 species. Then we considered the much less widely studied Y RNAs for mammals and nematodes.

Application to Artificial Data

As described in Section 2.8, artificial data sets were created using distinct levels of noise: perfect data, and a 20% and 40% loss of genomic anchors (see Figure 5). Using perfect data, that is, no deleted blocks, the pipeline exactly reconstructed the ortholog groups. With increasing noise level, the number of singletons decreased and the number of inferred local duplications increased, as loci were joined upon the loss of intervening anchors. With increasing noise level, an increasing fraction of deletion events were classified as missing data. At the same time, we observed an increase in inferred insertions at interior nodes of the tree, owing to a failure to correctly assign an ortholog from an outgroup. Both effects were expected and cannot be addressed at the level of synteny data; counteracting them would require more accurate and complete genome-wide alignments.

tRNAs

The comparison of the 6- and 10-species data sets shows an interesting effect: lineage-specific deletions of tRNAs seem to be very frequent in mammals (see Figure 6). Including three additional outgroups substantially increases the number of tRNA loci that predate the ancestor of the Catarrhini. While in the 6-species data set 206 of the 731 human tRNAs are placed on the ancestral branch, the number increases to 328 in the 10-species set. This is compensated for by a correspondingly larger number of lineage-specific losses in the outgroup species and a reduction of predicted insertion events in the human lineage. Remolding of tRNAs was analyzed for the 10-species mammalian data. Although the exact numbers depend on the choice of the similarity threshold and the details of the cluster-joining procedure, we recovered most of the remolding events previously described in [22,23]. Detailed data are provided in Supplementary Material S3.
As in previous reports, the overwhelming majority of remolding events concern pseudogenes and/or are lineage-specific, and they most likely represent the first steps in tRNA pseudogenization.

Figure 6. Summary of the evolutionary events inferred for transfer ribonucleic acids (tRNAs) in an evaluation with 6 (A) and 10 (B) species. Insertions that occur for groups of orthologous elements are placed at their lowest common ancestor, and possible deletions are added below the interior branches to which they refer. Other events such as singletons and duplications are added directly at the leaves for each species separately. Orthology relations are based on a threshold of 80% sequence similarity, and clusters were joined using the relaxed adjacency constraints. Numbers in parentheses are numbers of pseudogenes.

Mammalian Y RNAs

Our data suggest that the spread of Y RNA sequences is an ongoing process in mammals. Of the 990 loci identified, 190 date back to the ancestor of the Catarrhini, while on the order of 100 loci have been inserted in both the human and the chimpanzee lineage after their divergence (see Figure 7). The 6- and 10-species data sets are largely consistent, although the inclusion of an additional member of the Cercopithecinae moves many insertions previously estimated to be specific to Hominoidae to Catarrhini. Only a very moderate number of Y RNA loci were already populated in the ancestor of Simiiformes.

Figure 7. Summary of the evolutionary events inferred for Y ribonucleic acids (Y RNAs) in an evaluation with 6 (A) and 10 (B) species. See the caption of Figure 6 for a detailed legend. The main difference between the two data sets is that the inclusion of an additional member of the Cercopithecinae moves a substantial number of the insertion events from Hominoidae to Catarrhini.

The copy numbers of the Y RNA families are comparable with the data reported in [63].
Within Catarrhini, there are consistently more Y1 and Y3 genes than Y4 loci. The number of Y5 copies remains small throughout the clade. Consistent with [63], our data show an appreciable level of syntenic conservation of Y RNA loci also beyond the Y RNA cluster that typically harbors a functional copy of each of the four families [44]. Complete data are provided in Supplementary Material S4.

Nematode stem-bulge RNAs

The stem-bulge RNAs (sbRNAs) were discovered in a systematic screening of a ncRNA-specific full-length complementary DNA (cDNA) library for C. elegans [64] and in a subsequent contribution [65] that listed additional experimentally verified members of this family. A detailed study of their sequence evolution [47] showed that their best-conserved sequence elements are similar to those of vertebrate Y RNAs, leading to the realization that they are in fact the homologs of the Y RNAs in other animal clades. Functional similarities are discussed in [66]. Consistent with [47], we found arrays of tandem-duplicated sbRNAs in most species (see Figure 8). There are, however, no recognizable syntenically conserved orthologs. Given the large evolutionary distances and the high frequency of genome rearrangements [67], it is entirely plausible that this data set was too divergent to be informative for our method. This example draws attention to the limits of the synteny-based approach.

Figure 8. Summary of the evolutionary events inferred for nematode sbRNAs; see the caption of Figure 6 for a detailed legend.

Discussion and Concluding Remarks

The methods of molecular phylogenetics require a strong correlation between sequence similarity and evolutionary divergence times. Because the mechanisms of concerted evolution obliterate this correlation, molecular phylogenetics is not applicable to the analysis of multicopy gene families, including tRNAs and many other ancient ncRNA families. We have shown in previous work that this limitation can be overcome in a systematic manner by using synteny, that is, the conservation of relative gene orders, to identify orthologous elements [23]. In this contribution, we now report on the implementation of a computational pipeline that automatizes the corresponding workflow and thus makes synteny-based analysis of gene families available in practice at genome-wide scales. The Synteny Modulator Of Repetitive Elements (SMORE) pipeline, available from https://github.com/AnneHoffmann/Smore, is composed of two parts: the first component is concerned with the determination of orthology groups; the second implements methods for the identification of evolutionary events and their quantitative analysis. We demonstrated the functionality of the pipeline both on artificial data sets and using the analysis of tRNA and Y RNA genes as real-life showcase examples. The results are at least qualitatively consistent with previous studies and extend and refine them considerably.

The approach presented here assumes the perfect conservation of gene order in the vicinity of the elements of interest. While this is a very good approximation at smaller evolutionary scales, for example, among primate genomes, there are noticeable violations at larger scales, as exemplified by the nematode sbRNAs. At the same time, fewer synteny anchors are available for more distant genomes, because large fractions of the genome have diverged beyond the limits of reliable alignments. As a consequence, anchors are on average separated by larger genomic distances and thus are more likely to be separated by genome rearrangements.
It may be possible to include explicit information on gene-order differences, as in maximum likelihood gene-order analysis (MLGO) [68] or similar approaches [69-71]. A second open problem concerns the exact mapping of the local duplication events to the species tree. On the one hand, the co-graph of a family does not necessarily provide full resolution [55]; on the other hand, the pairwise list alignments of the elements are not necessarily consistent with each other. The reconciliation of pairwise alignments with duplications into a common multiple alignment with duplications is an as-yet-unresolved problem. An alternative approach is to use tools such as OrthoAlign [72], which also include genome rearrangements.

The SMORE pipeline sets the stage for large-scale quantitative investigations into the evolution of multicopy gene families. In particular, it provides the data required to estimate gain and loss rates and the relative effects of, for example, unequal crossover (which governs local gain and loss), retroposition (leading to insertions at novel loci), and pseudogenization (leading to a loss of function and the subsequent gradual disappearance of the element under consideration). This quantitative view is of particular importance for even larger families of repetitive elements.
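As a small illustration of the kind of downstream estimate mentioned here, per-branch gain and loss rates could be computed from event counts and branch lengths; all numbers below are illustrative stand-ins, not results from this study.

```python
# Hypothetical post-processing of per-branch event counts.
gains  = {"human": 42, "chimp": 37}        # illustrative counts only
losses = {"human": 55, "chimp": 49}
branch_myr = {"human": 6.5, "chimp": 6.5}  # assumed branch lengths in Myr

for sp in branch_myr:
    g = gains[sp] / branch_myr[sp]
    l = losses[sp] / branch_myr[sp]
    print(f"{sp}: {g:.1f} gains/Myr, {l:.1f} losses/Myr")
```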
The P48T germline mutation and polymorphisms in the CDKN2A gene of patients with melanoma

CDKN2A has been implicated as a melanoma susceptibility gene in some kindreds with a family history of this disease. Mutations in CDKN2A may produce an imbalance between functional p16INK4a and cyclin D, causing abnormal cell growth. We searched for germline mutations in this gene in 22 patients with clinical criteria of hereditary cancer (early onset, presence of multiple primary melanoma, or 1 or more first- or second-degree relatives affected) by single-strand conformation polymorphism analysis, a mutation scanning method that relies on the propensity of single-strand DNA to take on a three-dimensional structure that is highly sequence dependent, followed by sequencing of the samples with alterations in electrophoretic mobility. The prevalence of CDKN2A mutation in our study was 4.5% (1/22), and there was a correlation between family history and the probability of mutation detection. We found the P48T mutation in 1 patient with 2 melanoma-affected relatives. The patient descends from Italian families, and this mutation has been reported previously only in Italian families in two independent studies. This leads us to suggest the presence of a mutational "hotspot" within this gene or a founder mutation. We also detected a high prevalence (59.1%) of polymorphisms, mainly the 500 C/G (7 patients; 31.8%) and 540 C/T (6 patients; 27.3%) alleles, in the 3' untranslated region of exon 3. This result reinforces the idea that these rare polymorphic alleles are significantly associated with the risk of developing melanoma.

Correspondence

The incidence of cutaneous malignant melanoma is increasing all over the world, and approximately 10% of melanoma cases are estimated to report a first- or second-degree relative with melanoma (1). Familial melanoma frequently involves multiple primary melanomas, presents clinically atypical moles, and is diagnosed at a younger age than sporadic cases (2). A locus for this hereditary cancer has been mapped on 9p21, and CDKN2A (p16) is the main candidate gene for melanoma susceptibility. Germline mutations in this gene have been found in some melanoma-prone kindreds (3,4). The CDKN2A gene encodes the cyclin-dependent kinase inhibitor p16INK4a, which is involved in cell cycle control. This protein prevents the formation of a functional kinase capable of phosphorylating the retinoblastoma protein and thereby inhibits cell cycle progression from the G1 to the S phase (5).

The likelihood of finding a mutation in CDKN2A depends on family and population selection, ranging from about 1.5 to 50% (6), and several recurrent mutations in CDKN2A (V59G, G101W, 113insR) have been described as founder mutations (7-9). Polymorphisms in CDKN2A have also been described, but their influence on melanoma risk is uncertain. Debniak et al. (10) found a statistically significant positive association of the A148T variant among patients with malignant melanoma. However, they did not find a statistically significant overrepresentation of the 500 C/G and 540 C/T polymorphisms in the Polish melanoma population. Kumar et al. (11) found that the frequency of the same CDKN2A variants at both positions was higher in melanoma cases, although only the 540 C/T polymorphism reached statistical significance. Some intronic mutations predisposing to melanoma have also been described (12).
The etiology of cutaneous melanoma is complex, involving both heterogeneous genetic and environmental components. Three melanoma families have been reported to carry mutations in CDK4 (3,13). Gillanders et al. (14) studied 49 Australian families and 33 from other continents with at least three melanoma-affected members without CDKN2A and CDK4 involvement. Their research led to evidence for an additional melanoma susceptibility locus on chromosome 1p22.

In order to identify the role of the CDKN2A gene in patients with clinical criteria of hereditary melanoma, we conducted a mutation analysis in 22 Brazilian patients with at least one of these criteria: early onset, presence of multiple primary melanoma, or one or more first- or second-degree relatives affected.

This research protocol was submitted to and approved by a National Ethics Committee (Comissão Nacional de Ética em Pesquisa - CONEP), and the patients in the study gave written informed consent to participate. The patients were selected from the Melanoma Outpatient Clinic of the University Hospital, School of Medicine of Ribeirão Preto, with a histopathological diagnosis of cutaneous melanoma and at least one of these clinical criteria: age at diagnosis ≤50 years, presence of multiple primary cutaneous melanomas, or 1 or more first- or second-degree affected relatives. Among the 22 patients selected, 9 were the only melanoma-affected member of the family, 10 had 1 affected relative (first- or second-degree), 2 had 2 affected relatives, and 1 patient had 3 melanoma-affected relatives. Thirteen patients were ≤50 years old and 1 patient had multiple primary melanomas (Table 1). Data concerning the level of UV/sunlight exposure of these patients were not available.

For single-strand conformation polymorphism (SSCP) analysis, 3 µL of PCR products was diluted in 3 µL of denaturing solution (0.015 g bromophenol blue, 0.015 g xylene cyanol, 200 µL 0.5 M EDTA, pH 8.0, and 9.75 mL formamide). The solutions were denatured for 5 min at 94ºC, placed on ice for 2 min, and electrophoresed on 8% acrylamide gel at 4ºC. Gels were run at 8 W and 90 V for 8 h and stained with silver nitrate.

The samples with abnormal migration bands upon SSCP analysis were sequenced using the ABI Prism BigDye® Terminator Sequencing Kit on an ABI 377 automated sequencer (Applied Biosystems, Foster City, CA, USA). Patients were initially screened for germline mutations of the entire coding sequence of the CDKN2A gene by SSCP analysis.

Fourteen patients showed abnormal migration (1 in exon 1, 1 in exon 2, and 12 in exon 3). DNA sequence analysis of these patients revealed one missense mutation in 1 patient and three polymorphisms in 13 patients (Table 1). The missense mutation was a C to A transversion in exon 1 at position 1 of codon 48 in one of the alleles (heterozygosity), which changes the amino acid proline to threonine (P48T) in the p16INK4a protein (Figure 1). The three polymorphisms were in exons 2 and 3. The alteration in exon 2 was a G to A substitution at codon 148 (A148T). Thirteen patients (59.1%) had the 500 C/G (7 patients; 31.8%) or 540 C/T (6 patients; 27.3%) polymorphism in the 3' untranslated region (UTR) of exon 3 (Table 1). The present study is the first report of germline alterations in the CDKN2A gene in Brazilian patients with cutaneous melanoma. Mutation analysis in 22 patients showed the P48T mutation in 1 patient (No. 19) and three polymorphic alleles in 13 patients, mainly in the 3' UTR of exon 3.
The patient carrying the P48T mutation is a man who developed a single primary melanoma at 50 years of age and has 2 first-degree relatives affected. This mutation has been reported before by Moore et al. (16) in an Italian patient with pancreatic carcinoma. Della Torre et al. (17) studied the CDKN2A gene in 15 Italian families and found the P48T mutation in one family, where it segregates with melanoma. In this study (17), the P48T variant of p16 was found to be functionally impaired in its ability to inhibit cell cycle progression, suggesting a causal role for this mutation. Mantelli et al. (18) found 1 Italian patient with non-familial multiple primary melanoma carrying this mutation. Our patient descends from Italian relatives, and the detection of this same mutation in independent studies suggests that either there are mutational 'hotspots' within the CDKN2A gene, or the families in these studies are related through ancestry.

The prevalence of CDKN2A mutation in our study was 4.5% (1/22). There was a correlation between family history and the probability of mutation detection. No mutation was observed when the proband did not have melanoma-affected relatives (N = 9) or had only one affected relative (N = 10). Similar findings of low CDKN2A mutation rates among families with only two affected individuals have been reported by others (4,19). These results support the view that many familial cases represent clusters of sporadic melanomas among high-risk phenotypes or less penetrant susceptibility genes (20). Three probands in our study had two or more affected relatives, and one of them had the P48T mutation. The prevalence of mutation in these cases would be 33.3% (1/3).

In the present study, we found a high prevalence (59.1%; 13/22) of polymorphisms, mainly the 500 C/G (7 patients; 31.8%) or 540 C/T (6 patients; 27.3%) alleles, in the 3' UTR of exon 3. Kumar et al. (11) analyzed these polymorphic alleles in 235 controls and found the 500 C/G allele in 11.7% and the 540 C/T allele in 8.5%. The t-test comparing our higher frequencies with the Kumar et al. (11) controls showed statistical significance (P < 0.05) for both polymorphic variants; that is, the frequencies of these alleles in the patients with melanoma are higher than would be expected at random. In a review, Hayward (1) reported that these rare polymorphic alleles have been significantly associated with the risk of developing melanoma. According to this investigator, the mechanism by which these variants could confer a melanoma risk is unknown, but it is conceivable that these alleles could exert their effect by altering either the stability of the CDKN2A transcript or the level of CDKN2A transcription; alternatively, these variants might be in linkage disequilibrium with an as yet unidentified variant directly responsible for increased melanoma susceptibility.

Figure 1. A, SSCP analysis of exon 1 shows abnormal migration in patient 19 (arrow). B, Sequencing analysis shows heterozygosity (C/A, arrow) at position 142.

Table 1. Germline mutations in the CDKN2A gene of 22 Brazilian melanoma patients. Mutation analysis: - indicates not detected by PCR.
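For readers who want to reproduce the reported allele-frequency comparison, a rough re-check might look like the sketch below. The control carrier counts are reconstructed from the published percentages, and a Fisher exact test on the 2x2 table is used here in place of the t-test reported in the text; both choices are assumptions of this example.

```python
from scipy.stats import fisher_exact

cases_n, ctrl_n = 22, 235
for allele, case_carriers, ctrl_freq in [("500 C/G", 7, 0.117),
                                         ("540 C/T", 6, 0.085)]:
    ctrl_carriers = round(ctrl_freq * ctrl_n)   # approx. from reported %
    table = [[case_carriers, cases_n - case_carriers],
             [ctrl_carriers, ctrl_n - ctrl_carriers]]
    odds, p = fisher_exact(table)
    print(f"{allele}: odds ratio = {odds:.2f}, p = {p:.4f}")
```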
A CAD System for Alzheimer's Disease Classification Using Neuroimaging MRI 2D Slices

Developments in medical care have attracted wide interest in the current decade, especially for their role in helping individuals live longer and healthier lives. Alzheimer's disease (AD) is the most common neurodegenerative, dementia-causing disorder, and the economic expense of treating AD patients is expected to grow. The need to develop computer-aided techniques for early AD classification is therefore becoming ever more essential. Deep learning (DL) models offer numerous benefits over machine learning tools. Several recent experiments that exploited brain magnetic resonance imaging (MRI) scans and convolutional neural networks (CNN) for AD classification showed promising conclusions. A CNN's receptive field aids in the extraction of the main recognizable features from these MRI scans. In order to increase classification accuracy, a new adaptive model based on CNN and support vector machines (SVM) is presented in this research, combining the CNN's capabilities in feature extraction with the SVM's in classification. The objective of this research is to build a hybrid CNN-SVM model for classifying AD using the MRI ADNI dataset. Experimental results reveal that the hybrid CNN-SVM model outperforms the CNN model alone, with relative improvements of 3.4%, 1.09%, 0.85%, and 2.82% on the testing dataset for AD vs. cognitive normal (CN), CN vs. mild cognitive impairment (MCI), AD vs. MCI, and CN vs. MCI vs. AD, respectively. Finally, the proposed approach has been further evaluated on the OASIS dataset, leading to an accuracy of 86.2%.

Introduction

Healthcare problems are by far the most widely discussed subjects worldwide, so both healthcare providers and academics are working constantly to advance clinical diagnosis, therapies, and assessments aimed at saving patients' lives and improving healthy living. AD is one of the medical disorders posing a threat to human health [1]. AD and cerebrovascular disorder are two major types of dementia. Dementia is a neurological brain condition defined by continually declining cognitive abilities [2]. At the moment, there is no cure for AD, and it is considered destructive to individual life and vitality. Many individuals all across the world have been affected by it. According to the statistics in [3], AD is the 6th leading cause of mortality in the United States, and the 5th leading cause of death in older adults above the age of 65. Although several other causes of mortality have been falling, the fatality rate from AD has been rising tremendously. Between 2000 and 2006, fatalities from heart complications dropped by roughly 12%, blood clot mortality declined by 18%, and prostate cancer-related casualties dropped by 14%; however, mortality from AD rose by 47%. Meanwhile, lives lost to AD grew by 89%, as per the reports in [4]. According to the estimates in the 2019 Alzheimer's disease facts and figures, the number of individuals lost to AD will have quadrupled by 2050. The exact count of deaths caused by AD is probably much higher than documented in official records. Approximately 18.5 billion hours of assistance were given to persons having AD or other brain disorders by more than 16 million close relatives in 2018 [5]. AD-related neuroanatomical biomarkers appear several years prior to the clinical features of cognitive problems, meaning that AD progression might well be identified by in vivo biomarker analysis [6,7].
Biomarkers include positron emission tomography (PET), MRI, and blood or cerebrospinal fluid. MRI is widely employed in the identification and diagnosis of AD. MRI scans have numerous benefits over comparable techniques. For instance, MRI does not involve radiation exposure and is therefore noninvasive, cheaper, and more readily available in clinical settings. Furthermore, MRI can collect heterogeneous information during the same imaging session [8]. Visual examination, on the other hand, is prone to human visual constraints and several other factors, such as judgment and the clinician's expertise [9]. Furthermore, the health sector acts as a set of independent units, such as clinics and industrial and healthcare departments. As a result, increased data exchange across such institutions is required to better understand the symptoms, any new developments, and the related test findings [10-12].

Various machine learning (ML) algorithms applied to structural MRI have been employed in past studies to classify AD individuals against normal healthy people. One of the most widely used ML approaches is the support vector machine (SVM) [13]. This technique extracts high-dimensional, meaningful features from MRI scans to develop a classifier that automates the diagnosis of AD. ML classification comprises two main steps: feature engineering (feature extraction, feature selection, and dimensionality reduction) and, based on those features, classification. Such an approach has several limitations, as it demands extensive data preprocessing, which requires a lot of time and heavy computation [14,15]. Additionally, the scalability of these techniques is regarded as a crucial problem [16]. DL techniques have significant benefits over traditional ML approaches [17], and neural networks are widely used in artificial intelligence systems [18]. For instance, such techniques need not involve extensive image preprocessing and can therefore acquire appropriate features from raw imaging data without human intervention. This results in methods that are less labor-intensive, unbiased, and highly objective. DL methods, as previously discussed, are ideally suited to managing large, high-dimensional medical imaging data. As per experimental research, CNN, a DL technique, outperforms conventional ML algorithms.

AD is an incurable neurological disorder that causes gradual mental decline, often in the elderly. The purpose of this research is to gain a better understanding of how AD progresses: by identifying brain areas that degrade together during AD, we can gain a better understanding of how the illness proceeds over the course of a patient's life. The goal is not only to achieve diagnostic accuracy but also to provide relevant medical evidence. Thus, the primary objective of this work is to classify the degree of disease of the brain that undergoes neuronal degeneration concurrently with AD utilizing DL models and other ML algorithms. Hybrid techniques can be developed by integrating various strategies to enhance the system [19]. In this paper, a computer-aided diagnosis (CAD) system for AD classification has been built utilizing a CNN-SVM approach, which is suggested as an improvement over the CNN model alone.
The proposed CNN-SVM model includes convolutional layers (along with one additional fully connected layer) for feature extraction; for classification, an SVM is then used rather than a softmax or Sigmoid layer. Compared with traditional methods, the hybrid CNN-SVM method is directly driven by the data: applied to 2D MRI scans, the proposed hybrid model can learn feature representations directly from the images, which makes it well suited to image data representation. Further, the CNN can independently learn and extract each local feature of the data through multilayer convolution and pooling operations and obtain more effective abstract feature mappings than explicit feature extraction methods [20]. Additionally, the SVM operates on the hierarchical feature representation learned by the deep structure, enabling effective binary as well as multiclass classification and reducing the error rate of AD recognition. Thus, the improved system is divided into the tasks listed below (a short preprocessing sketch for steps (i) and (ii) is given below):

(i) Conversion from NIfTI to 2D slices
(ii) Selection of middle slices out of the extracted slices for each subject
(iii) An enhanced CNN-SVM approach for extracting significant features and then classifying them

The developed CNN-SVM approach is then tested against an experimental end-to-end CNN model. As a result, utilizing an SVM as the final classifier outperformed classification with a softmax or Sigmoid function. The remaining part of the paper is structured as follows: Section 2 comprises related work on AD classification, whereas Section 3 explores the theoretical framework of the CNN and SVM. Furthermore, Section 4 presents the details of dataset acquisition along with its preprocessing, as well as the methodology adopted and its performance. Section 5 outlines the research's conclusions and future scope.

Related Work

AD is a persistent and irreparable brain degenerative illness [21] that causes cognitive decline, depressive symptoms, linguistic confusion, impaired decision-making, and mental disability [22-24]. The disease also causes anatomical structures such as the hippocampus, responsible for long-term memory, and the cerebral cortex to shrink, while the ventricles in the brain expand. A healthcare professional can visualize disease progression based on these characteristics utilizing neuroimages of patients in the late stages of AD. Moreover, the intensity of each of these alterations in the nervous system varies with the severity of the disease; in particular, dramatic contraction of the hippocampi and cerebral cortex and ventricular enlargement are clearly visible on neuroimaging at the final stages of the disease [25]. Patients in the early stage of the disease are often said to have MCI [26], although not all MCI patients progress to AD. MCI is a transitory phase from normal cognition to AD in which the person experiences minor changes in behavior that are observable to the afflicted individuals and their close relatives. In such scenarios, the transition phase varies from around six months to three years, with one and a half years being the most usual. As a result, MCI participants are usually split into 2 groups: convertible MCI and nonconvertible MCI [27]. Unfortunately, the underlying etiology of AD is still obscure to healthcare experts, and no recognized treatments or remedies have been shown to prevent or reverse the development of AD [28].
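Returning to preprocessing steps (i) and (ii) from the task list above, a minimal sketch is given here: loading a NIfTI volume and keeping a band of middle axial slices. nibabel is a standard NIfTI reader; the file path, the number of slices kept, and the normalization are assumptions of this example, not the authors' exact procedure.

```python
import numpy as np
import nibabel as nib  # common NIfTI I/O library

def middle_slices(nifti_path, n_keep=32, axis=2):
    """Return `n_keep` middle slices along `axis`, normalized to [0, 1]."""
    vol = nib.load(nifti_path).get_fdata()
    mid = vol.shape[axis] // 2
    lo = mid - n_keep // 2
    slices = np.take(vol, range(lo, lo + n_keep), axis=axis)
    slices = (slices - slices.min()) / (np.ptp(slices) + 1e-8)
    return np.moveaxis(slices, axis, 0)   # shape: (n_keep, H, W)

# e.g. middle_slices("subject01.nii.gz") -> array ready for 2D CNN input
```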
Some ML- and DL-based CAD techniques for AD classification are discussed below. Scientists have recently developed a variety of CAD diagnostic methods to support disease diagnosis. From the 1970s to the 1990s, experts established rule-based intelligent algorithms and, afterwards, supervised models. To build supervised algorithms, features were extracted from the clinical image data [29]. In view of the complex features of brain images, the research group of Han et al. [30] proposed a DL-based methodology named HCSAE (hierarchical convolutional sparse autoencoder), which arranged various CSAEs in an unsupervised hierarchical mechanism. The CSAEs retrieved the key aspects of the input utilizing the SAE and compiled the input data in a convolutional way, which further enabled the derivation of impactful and accurate features and preserved plentiful detail for brain imaging identification. Brain imaging fMRI data were used to validate their method, which demonstrated significant capability when compared to standard classifiers.

The authors in [31] effectively differentiated AD fMRI data from healthy controls (HC) employing CNN and the well-known model LeNet-5. Additionally, they employed the LeNet model from Caffe DIGITS 0.2, which is inspired by deep CNNs. In their architecture, they deployed 2 convolutional layers, each followed by a max-pooling layer. In the classification of AD vs. HC, the model obtained 96.9% overall accuracy. The experiment revealed that using CNN to capture stable features followed by DL classification is an effective method for distinguishing diseased data from HC in fMRI.

In [9], Gupta et al. evaluated filter (kernel) learning using a sparse autoencoder. The authors evaluated it on different types of data: (a) MRI data and (b) natural images. Following the training of 100 bases indicating lesions in MRI data, the researchers applied 2D convolutions to the MRI data. The Sigmoid activation function was then applied to derive feature activations. Subsampling max pooling was used to reduce the dimensions. Payan and Montana adopted a similar strategy, training a sparse autoencoder for feature extraction and then employing a CNN on those learnt features. Similar to Gupta et al.'s model, the convolution layers were followed by subsampling pooling, an FC layer, and a softmax output layer with three outputs according to the class probabilities [32].

The researchers in [33] suggested the use of CNN in the diagnosis of AD, HC, and MCI. The authors particularly optimized VGGNet-16 for ternary classification of AD, MCI, and HC employing the AD Neuroimaging Initiative (ADNI) dataset. In contrast to other classifiers, they attained an overall accuracy of 92 percent. Another group of researchers in [34] recommended an AD classification model using a deep 3D CNN, which could identify patterns and features indicating AD signs and adapt to varied application datasets. The 3D-CNN was based on a 3D convolutional autoencoder which had previously been trained to detect structural form variations in sMRI. The 3D CNN's top FC layers were then fine-tuned for each target-specific AD classification task. Research on the CADDementia MRI dataset without skull-stripping preprocessing revealed that the 3D-CNN outperformed various classical models in terms of performance. The ADNI dataset was used to validate their model.
The authors of [35] presented a cascaded 3D CNN applied in a hierarchical manner to learn nonlinear image characteristics that were ensembled for AD classification from PET images of the brain. Initially, several deep 3D CNNs were built on distinct local input image patches in order to turn the local patches into more compact, discriminative features. Next, for classification, a deep 3D CNN was developed to combine the high-level features. The proposed methodology automatically learned generalizable features for identification from PET scans, and preprocessing of the PET scans did not require any registration or segmentation. Table 1 illustrates the related studies on AD classification using various approaches. Another group of researchers [43] attempted to address the challenge of small medical datasets using transfer learning, in which cutting-edge frameworks such as VGG-16 and Inception-V4 were initialized with pretrained weights from the ImageNet dataset and only the FC layer was retrained with a limited quantity of OASIS MRI neuroscans. Image entropy was also used to extract the most informative slices. The researchers showed that, on the OASIS MRI dataset, with training sizes nearly ten times smaller than the state of the art, equivalent or even higher accuracy could be obtained than with existing DL-based techniques. Recently, DL models have been successfully applied to the Alzheimer's dataset to distinguish HC from other classes (MCI, cMCI, ncMCI, and AD). Theoretical Background 3.1. Convolutional Neural Network. During the last decade, CNNs have achieved groundbreaking results in a wide range of pattern recognition domains, from computer vision to speech classification [44,45]. One of the most advantageous properties of CNNs is that they require far fewer parameters than fully connected ANNs. This success has motivated many academics and research groups to pursue larger models for complicated tasks that were previously impossible with traditional ANNs. The key assumption behind problems that CNNs solve well is that the features of interest should not be spatially dependent [46]. In disease detection using medical data [47][48][49], researchers need not be concerned with where the main features are located in the image. Furthermore, the system is capable of capturing spatial-spectral correlations in MRI neuroscans [50], for both three-dimensional and two-dimensional medical images, via suitable minimization and optimization settings [51]. A standard CNN comprises three primary layers: the convolutional layer (CL), the subsampling (pooling) layer, and the fully connected (FC) layer [8,[52][53][54], illustrated in Figure 1. The convolution operation is the CNN's basic building component and carries most of the network's computational load. The layer computes the dot product of two matrices, one of which is a matrix of trainable parameters known as a kernel or filter, while the other is a patch of the input image. The result of the convolution operation is a 2D matrix in which every entry corresponds to the sum of elementwise products between the kernel and the underlying image patch. 
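To make this operation concrete, the following minimal Python/NumPy sketch (an illustration, not code from the article; the function name and toy sizes are assumptions) slides a kernel over a single-channel image and takes the product-and-sum at every position:

```python
import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    """Slide the kernel over the (optionally zero-padded) image and
    compute the elementwise product-and-sum at every position."""
    if padding:
        image = np.pad(image, padding)
    n, f = image.shape[0], kernel.shape[0]
    out = (n - f) // stride + 1          # output width: (N - F + 2P)/S + 1
    result = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            patch = image[i*stride:i*stride+f, j*stride:j*stride+f]
            result[i, j] = np.sum(patch * kernel)
    return result

image = np.random.rand(28, 28)            # toy single-channel slice
kernel = np.random.rand(3, 3)             # 3x3 filter (trainable in a CNN)
print(conv2d(image, kernel, stride=1, padding=1).shape)  # -> (28, 28)
```

The comment in the loop header previews the output-size formula derived next.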
So, for an input of size N × N × D, where N × N is the height and width of the image and D is the depth, a filter with spatial size F, padding P, and stride S produces an output feature map whose width (and height) can be calculated as W_out = (N − F + 2P)/S + 1. Afterwards, the pooling operation replaces the network output at specific locations with a summary statistic of adjacent outputs. This helps to reduce the dimension of the feature (activation) maps and therefore lowers the computational cost and the number of weights involved. The pooling operation is performed on each slice of the representation separately; after pooling with window size F and stride S, the image size reduces to W_out = (N − F)/S + 1.
Table 1: Related studies on AD classification using various approaches.
- HadNet (3D CNN): The researchers implemented a HadNet DL model to develop a classification system for MRI neuroscans, founded on a 3D CNN. The HadNet architecture's foundation comprised layered convolutions (inception methodology), which captured additional internal features of the MRI scans relevant to AD; HadNet's hyperparameters were fine-tuned using Bayesian optimization.
- [37] (2019), 3D-CNN: The researchers revealed numerous approaches for improving the performance of a 3D CNN trained on an sMRI neuroimaging dataset to classify AD. The authors further showed that instance normalization outperformed batch normalization, that early spatial downsampling reduced accuracy, that widening the framework provided stable improvements whereas extending depth did not, and that including age as an input feature offered a minor gain in performance.
- [38] (2020), CNN-RNN-LSTM based: The authors first developed three core models: a CNN, a recurrent neural network (RNN), and a long short-term memory network (LSTM). An ensemble approach was then applied to integrate all three models using a weighted-mean strategy; bagging was applied in all three approaches to reduce variability, and the three bagged models were integrated with the ensemble technique.
- [39] (2021), VGG, ResNet-50, AlexNet: This study aimed to classify MRIs of AD patients into several classes via transfer learning models such as VGG16, ResNet-50, and AlexNet, along with a CNN.
- [40] (2020), 3D ResNet-18: The authors suggested a transfer-learning technique for 3D CNNs that enables learning to be transferred from 2D image datasets to 3D image datasets.
- [41] (2021), 2D-CNN: With parameter optimization, a 2D CNN was employed to assess the architectural impact on diagnostic accuracy for four classes of images (mild, very mild, moderate, and nondemented) considering AD.
- [42] (2022), Sliding window association test (SWAT-) CNN: A three-step approach for detecting biological variants, which leverages DL to determine phenotype-associated single-nucleotide polymorphisms that may be used to build an appropriate AD classifier.
3.2. Support Vector Machine. SVMs are a fundamental tool in learning theory, and the algorithms are highly effective for a variety of tasks in engineering and science, notably classification problems [55]. Inspired by Fisher's [56] classification techniques for splitting data, Boser et al. [57] proposed the SVM with a polynomial kernel. SVMs have been the focus of considerable research since then, including deployments to a variety of relevant tasks, numerous modifications of the original design, and some conceptual study. The SVM represents multidimensional data in a space partitioned by a hyperplane that separates the data elements into distinct classes.
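As a minimal illustration of such a hyperplane classifier (a sketch, not the article's code; the feature dimension and labels are placeholders standing in for CNN-derived feature vectors), scikit-learn's SVC can be fit directly on feature vectors:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 128))      # stand-in for 128-dim feature vectors
y = rng.integers(0, 3, size=150)     # stand-in labels: 0=CN, 1=MCI, 2=AD

# Scale features, then fit a kernel SVM; multiclass is handled one-vs-one.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:120], y[:120])
print(clf.score(X[120:], y[120:]))   # held-out accuracy (random data here)
```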
On new, unseen data, the SVM classifier can reduce the classification error. SVM has proven efficient for binary classification but can be inadequate for noisy data containing outliers. Methodology and Implementation 4.1. Dataset Acquisition and Preprocessing. This study used the ADNI dataset. A list of all ADNI investigations may be found at http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf [58]. The dataset was started in 2004 with grants from the National Institute on Aging (NIA) and the National Institute of Biomedical Imaging and Bioengineering (NIBIB), together with a variety of pharmaceutical companies and organizations. The prime focus of ADNI was to follow the status of early AD and MCI using a blend of clinical and neuropsychological measures, MRI, fMRI, PET, and related biomarkers. The data collection documentation is maintained on the ADNI portal [59], which is directed by Michael W. Weiner, MD. The dataset includes 50 participants from each of the three classes (CN, MCI, and AD), as illustrated in Table 2; 80% of the subjects were used for training and the remaining 20% for testing the model. Each participant underwent approximately 3-4 MRI scans over time. The dataset as originally downloaded was available in the (3-dimensional) NIfTI format. First, we adopted Algorithm 1, discussed below, to convert the NIfTI files to 2D images (PNG format), since training a 3D CNN directly on NIfTI files takes a long time and is relatively costly [60]. The number of images recovered after conversion for each single MRI scan was 256. Only the innermost 66 of the 256 slices were studied; the remaining (outermost) slices were not considered, since they exhibited no valuable features. Sample images extracted from NIfTI after conversion are shown in Figure 2. Figure 3 depicts the workflow for the proposed hybrid CNN-SVM architecture for AD classification, which is divided into two stages: data collection and splitting of the dataset, followed by feature extraction and classification. Figure 4 depicts the usage of the CNN to extract features and of the SVM as a classifier on those derived features. This study's CNN architecture comprises four convolutional layers; the optimizer was configured with a learning rate of 0.0020, the dense layer had 128 units, and the ReLU activation function was adopted in all CLs. After feature extraction through the CNN is completed, the SVM classifier is employed to classify the ADNI images. SVM classifier training was carried out with feature maps encoded in matrix format, and the training results were used to evaluate the ADNI test data. In effect, the automatically derived features from the CNN network were forwarded to the SVM component for training and testing on the ADNI dataset; the ADNI testing data are preprocessed in the same way before being used to test the classifier. Table 3 demonstrates the overall impact of using SVM as the final classifier versus an end-to-end CNN for feature extraction and classification, for both training and test performance, and indicates that integrating SVM as a classifier on the features derived by the CNN outperforms using the CNN alone. 
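As a hedged sketch of how steps (i)-(iii) could be wired together, the outline below uses nibabel for slice extraction, Keras for the CNN, and scikit-learn for the SVM. It is an illustration under stated assumptions, not the authors' code: the layer count, input resolution, file paths, and toy data are placeholders (the study's CNN had four convolutional layers and kept the innermost 66 of 256 slices).

```python
import numpy as np
import nibabel as nib
from tensorflow.keras import layers, models
from sklearn.svm import SVC

def middle_slices(nifti_path, keep=66):
    """Steps (i)-(ii): load a 3D NIfTI volume and keep the innermost slices."""
    vol = nib.load(nifti_path).get_fdata()
    mid = vol.shape[2] // 2
    return vol[:, :, mid - keep // 2 : mid + keep // 2]

# Toy stand-ins for preprocessed 2D slices (64x64 grayscale) and labels.
rng = np.random.default_rng(0)
X = rng.random((300, 64, 64, 1)).astype("float32")
y = rng.integers(0, 3, size=300)          # 0=CN, 1=MCI, 2=AD

# Step (iii), part 1: train a small CNN end-to-end; the 128-unit Dense
# layer before the softmax head is the feature layer mentioned in the text.
cnn = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu", name="feats"),
    layers.Dense(3, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(X[:240], y[:240], epochs=2, verbose=0)

# Step (iii), part 2: replace the softmax head with an SVM fit on the
# 128-dimensional feature maps extracted by the trained CNN.
extractor = models.Model(cnn.input, cnn.get_layer("feats").output)
F_train = extractor.predict(X[:240], verbose=0)
F_test = extractor.predict(X[240:], verbose=0)
svm = SVC(kernel="rbf").fit(F_train, y[:240])
print((svm.predict(F_test) == y[240:]).mean())
```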
Table 4: Comparative analysis of the proposed approach with previously proposed state-of-the-art classification systems.
- AlexNet + PCA/t-SNE: The research team used a pretrained (or further trained) AlexNet CNN as a generalized feature representation of 2D MRI neuroimaging, wherein dimensionality was compressed through PCA + t-SNE before classification using a basic ML technique.
- [64] (2019), ADNI: Six different ML and data mining methods were applied to the ADNI dataset to classify the five distinct phases of AD and to determine the most distinctive feature for each phase.
- CAE-based: The investigators applied unsupervised learning based on a CAE to address the AD/NC classification challenge and supervised pretrained models to tackle the pMCI/sMCI classification task. A gradient-based visualization technique, reflecting the temporal impact of the CNN designer's choices, was implemented to find the most relevant biomarkers associated with pMCI and AD.
In the case of binary classification, the accuracy of CN vs. MCI on the training sets is 83.71 percent and 85.2 percent, respectively, and that of AD vs. MCI is 84.23 percent and 84.9 percent, both lower than the accuracy of AD vs. CN. Furthermore, a clear contrast can be seen between the AD vs. MCI and CN vs. MCI classifications on the one hand and AD vs. CN on the other, as it is harder to distinguish the early phase (i.e., MCI) from CN and AD. Comparative Analysis with State-of-the-Art Datasets and Technologies. The availability of sufficient resources, as well as an imaging dataset, is critical to the creation of an AD classification system. In real-world applications, ongoing research in AD classification is leading to greater use of hybrid modeling approaches capable of self-learning expressive correlations, which is regarded as an ideal method for visual data representation. CNNs used for effective classification of MRI scans resemble ordinary neural networks in that they are made up of hidden layers consisting of neurons with learnable parameters [23]. However, the methodologies proposed earlier clearly lag in automatically learning hierarchical feature representations of images, which can otherwise be exploited on top of the deep structure for effective binary as well as multiclass classification [28,44,61]. Table 4 lays out a state-of-the-art comparison of diverse datasets and modeling methodologies, allowing for a relevant assessment of the effectiveness of DL, transfer learning, and hybrid learning. Contribution of the Proposed Work. In addition to its practical implications, the present study contributes to the existing literature on AD. It adds to the understanding of what kinds of biomarkers can be utilized for AD and which classification techniques can be applied effectively. The analysis adds to existing research by identifying a novel hybrid-approach-based learning of features that should be considered in the early stages of the innovation process, in contrast to traditional methodologies that utilize a single neural network. This research further confirms the results of existing studies that emphasized the importance of modeling in the detection of neurological diseases, mainly AD, and of the availability of relevant resources [22,29,49,62]. The present study also improved the classification of AD in comparison with our previously performed experiments, which in the real world could contribute to the existing innovation and technology transfer literature and to biotechnology-focused studies. 
Moreover, the study contributed to prior theory by applying, validating, and extending a model for the detection of AD. The results showed the improved performance of the multiclass classification system using the hybrid CNN-SVM modeling approach. Further, to strengthen the results, the study was also run on the OASIS dataset, with relevant biomarkers taken into consideration and the same hybrid approach applied. The management of AD in its early stages depends on the context and should be considered accordingly. In addition, whereas existing research has often utilized a standalone ML or DL technique for AD classification, the present research contributes a hybrid modeling study that links two different methodologies (CNN and SVM). The study adds to existing research through hierarchical feature representation of images via the CNN, which can be well utilized for effective binary as well as multiclass classification through the SVM, something often not done in prior studies in the domain of biomedical engineering and technology transfer. Conclusion. AD is a progressive neurological condition in which brain cell loss results in significant mental deterioration. It is the most prominent type of dementia and has a severely destructive influence on both the personal and sociocultural activities of individuals. Timely recognition of AD permits the sufferer to obtain the best feasible treatment. Various experts are investigating this issue, and many ways of recognizing AD have already been proposed. In this research, a CNN-SVM hybrid model for AD classification is proposed, which integrates automated feature extraction with a CNN and classification with an SVM. To identify AD, the network incorporates the strengths of the CNN and SVM classifiers. The approach additionally favors automatically generated features over hand-engineered features. Results on the ADNI data, with 50 subjects in each of the categories CN, MCI, and AD, suggested that a hybrid CNN-SVM using the CNN for feature extraction and the SVM for classification achieved high accuracy for AD vs. CN binary classification, with an RI of 3.4 percent on the testing dataset. The binary classification accuracy of CN vs. MCI on the training set is 83.71 percent and 85.2 percent for the CNN and the hybrid CNN-SVM model, respectively, while the accuracy of AD vs. MCI is 84.23 percent and 84.9 percent, both lower than the accuracy of AD vs. CN. Additionally, a clear difference can be noticed between the AD vs. MCI and CN vs. MCI categories and AD vs. CN, since it is more difficult to distinguish the early phase (i.e., MCI) from CN and AD. 5.3. Future Scope. The proposed hybrid modeling technique still has significant limitations. To begin with, optimizing the parameters of the CNN alongside the implementation of the SVM, such as the number of hidden layers and the size and number of kernels for each layer, is a difficult and time-consuming operation. Furthermore, the proposed method's learnt features lack adequate clinical information for visualization and interpretation of the neurodegenerative disease AD. Nonetheless, in the near future, the aforementioned shortcomings can be overcome by configuring the CNN parameters based on optimal selection methodologies, employing optimal kernel sizes, and effectively including the clinical features. 
Moreover, the authors chose a single MRI neuroimaging modality in this study, although the integration of several other data modalities could give a more comprehensive perspective on AD staging evaluation and further improve the model's performance. In the future, therefore, additional modalities, such as DTI and PET, might be employed along with MRI brain scans to identify AD and CN. Future applications of the suggested hybrid CNN-SVM include other disease classification tasks such as lung cancer detection, brain cancer detection, and autism detection. Data Availability The data that support the findings of this investigation are available from ADNI (http://adni.loni.usc.edu); however, they are subject to restrictions because they were utilized under permissions for this work and are therefore not publicly available. The authors' data are, however, available upon reasonable request and with ADNI's approval.
6,718.6
2022-08-09T00:00:00.000
[ "Computer Science" ]
Effect of Cold Work on Creep Rupture Strength of Alloy263 The creep rupture strength and the microstructure change during creep deformation of pre-strained Alloy263 were investigated. Creep rupture tests were conducted at 1023 and 1073 K over a stress range from 120 to 250 MPa. The creep strength of the pre-strained samples was higher than that of the non-strained samples; however, the rupture strain of the pre-strained samples was much lower. In the pre-strained samples, Ni3(Al, Ti)-γ' and M23C6 inside the grains precipitated more finely than in the non-strained samples when compared at the same creep time. At grain boundaries, the fraction of boundary covered by M23C6 carbide showed almost the same value among the samples, but the diameter of the M23C6 particles decreased in the pre-strained samples. Furthermore, dynamic recrystallization was promoted, and a precipitation free zone (PFZ) formed around the Ni3Ti-η phase at grain boundaries. These observations show that the increase in creep strength of the pre-strained samples was due to increased precipitation strengthening in the grains by the fine precipitation of γ' and M23C6. In addition, resistance against crack propagation at grain boundaries was increased by the fine precipitation of grain boundary M23C6, despite the formation of the PFZ and the promotion of dynamic recrystallization. It is estimated that the Orowan stress of the pre-strained samples was 1.7 times higher than that of the non-strained samples. It is considered that these strengthening effects overcame the weakening effects in the pre-strained samples. Introduction Currently, higher thermal efficiency in fossil power plants is required in order to cover increasing power demand and to reduce greenhouse gas emissions [1]. To improve the thermal efficiency of fossil power plants, the 973 K class Advanced Ultra-Supercritical (A-USC) power plant has been developed, in which the boiler steam temperature and pressure are increased relative to the 873 K class USC power plant [2-4]. The thermal efficiency of power generation is expected to improve in A-USC power plants. For the boiler tube material of an A-USC power plant, excellent creep rupture strength and good corrosion resistance are required compared with existing ferritic heat resistant steels. Several Ni-based alloys have been investigated as promising candidate materials for A-USC boiler tubes. In particular, Alloy263 shows superior high-temperature mechanical properties [5,6]. Normally, plastic strain is applied to boiler tube materials by cold work during the fabrication process. It is important to investigate the effect of plastic strain introduced by cold work on creep rupture strength, both for deciding on heat treatment after the fabrication process and for the long-term creep life prediction of boiler tube materials. Several works have investigated the effect of plastic strain on the high-temperature strength of Ni-based alloys for boiler tubes, such as Alloy617, Alloy740, and HR6W; it was reported that the effect of plastic strain on creep rupture strength differs significantly among materials [6-11]. However, there are few reports on the effect of plastic strain on the creep strength of Alloy263. In this paper, the creep strength and microstructure change during creep deformation of Alloy263 pre-strained up to 30% were investigated to reveal the effect of pre-strain on the creep properties of Alloy263. 
Experimental Procedures Alloy263 was prepared from a solution-treated tube (1423 K / 2 h, water quenched) with an outer radius of 38 mm and an inner radius of 8.8 mm. The chemical composition of Alloy263 is shown in Table 1. Pre-strain was introduced by a tensile test up to 30% at room temperature. Creep test specimens with a diameter of 6 mm and a gauge length of 30 mm were machined parallel to the longitudinal direction of the tube. Creep tests were conducted at temperatures of 1023-1073 K and stresses of 100-140 MPa. In several pre-strained specimens, creep tests were interrupted at the rupture time of the corresponding non-strained sample. Microstructure observation was conducted on the parallel portion of the specimen using an optical microscope (OM), a scanning electron microscope (SEM), and a scanning transmission electron microscope (STEM). The area fraction of grain boundaries covered by precipitates, ρ, was calculated by the following equation: ρ = Σi li / L, where li and L are the lengths of the grain boundary segments covered by precipitates and the total grain boundary length, respectively [11]. ρ was calculated from more than 5 secondary electron images (SEIs) at a magnification of 2,000x. The average particle size of Ni3(Al, Ti)-γ' and of the M23C6 carbide inside the grains was determined from the particle diameters in TEM images taken at 20kx in more than 3 areas. The average particle size of the grain boundary M23C6 carbide was calculated from back-scattered electron images (BEIs) taken at 2kx in more than 3 areas. Samples were electrolytically etched using a 10% oxalic acid reagent. Orientation analysis was conducted by OIM (Orientation Imaging Microscopy) at an acceleration voltage of 20 kV, a working distance of 15 mm, and a step size of 0.25 µm. Figure 2 shows the dependence of stress on the Larson-Miller parameter (C = 20) for the non-strained and pre-strained samples at 1023 and 1073 K. The creep rupture life of the pre-strained samples increased by a factor of 1.3 at the higher stress levels; the samples showed similar behavior at 1023 K. Figure 3 shows the creep rate versus time curves of the non-strained and pre-strained samples at 1073 K and 140 MPa. The arrow on the creep rate-time curve marks the point at which the creep test of the pre-strained sample was interrupted, which is equivalent to the rupture time of the non-strained sample. In the transient creep stage, the strain rate decreased rapidly in both samples, after which they shifted to accelerating creep without a steady-state stage. Work hardening had little effect on the transient creep behavior of the pre-strained sample. The non-strained sample reached a minimum creep rate of 1.0 × 10⁻⁵ s⁻¹, whereas the pre-strained sample reached a minimum creep rate of 7.2 × 10⁻⁶ s⁻¹, 1.4 times lower, while taking more than 5 times longer to do so. Figure 4 shows the creep strain versus creep rate curves of the non-strained and pre-strained samples at 1073 K and 140 MPa. The strain at the minimum creep rate of the pre-strained sample was 0.2%, twice as large as that of the non-strained sample. However, the creep strain accumulated by the pre-strained sample during accelerating creep was quite small; indeed, the rupture strain of the pre-strained sample was only 4%, significantly lower than the 40% of the non-strained sample. Figure 5 shows OM images of the non-strained and pre-strained samples creep-ruptured at 1073 K and 140 MPa, taken near the rupture area (Figure 5(a, b)) and from the central gauge portion of the specimens (Figure 5(c, d)). 
In the non-strained sample, cracks were concentrated near the rupture area and few cracks were observed in the central gauge portion (Figure 5(a, c)). On the other hand, in the pre-strained sample many cracks were observed not only near the rupture area but also at grain boundaries in the central gauge portion (Figure 5(b, d)). Figure 6 shows SEIs of the non-strained and pre-strained samples creep-ruptured at 1073 K and 140 MPa. Inside the grains, the γ' particle size was about 100 nm in the pre-strained sample and 200 nm in the non-strained sample; despite the longer rupture time of the pre-strained sample, its γ' particles were finer (Figure 6(c, d)). Plate-like η phase precipitated inside the grains and at grain boundaries in both samples. At grain boundaries, massive M23C6 carbides were observed. In addition, a PFZ a few µm wide formed around the grain boundary η phase in both samples. Effect of Pre-Strain on Microstructure Inside the Grains During Creep Deformation It is difficult to directly compare the microstructures of the pre-strained and non-strained samples shown in Figures 5-6 because the creep rupture time of the pre-strained sample was 1.4 times longer than that of the non-strained sample. In order to compare the microstructures at the same creep time, the creep test of the pre-strained sample at 1073 K and 140 MPa was interrupted at the rupture time of the non-strained sample, as indicated by the arrow in Figure 3. Figure 7 shows bright-field images and STEM/EDS point analysis results for γ' in the non-strained (creep-ruptured) and pre-strained (creep-interrupted) samples deformed for the same time at 1073 K and 140 MPa. The average diameter of the γ' particles in the pre-strained sample was about 150 nm, much smaller than the 200 nm of the non-strained sample. In the non-strained sample, short dislocations, less than 100 nm in length, were distributed uniformly in the grains. In the pre-strained sample, the γ' particle spacing was shorter and dislocations about 200 nm in length were observed; the dislocation density was higher than in the non-strained sample. STEM/EDS mapping results for these samples are shown in Figure 8. It is clear that the content of Ti in γ' was much higher than that of Al in both samples, which explains the formation of the PFZ around the grain boundary η phase in the crept samples shown in Figure 6. W. Wen-juan et al. reported that η phase precipitation reduces the volume fraction of the γ' phase, because η precipitation and growth rely on the diffusion of Ti, which degrades the creep strength of a Ni-Co-Cr alloy [13]. It is also reported that intragranular precipitation of the η phase deteriorates mechanical properties, especially rupture strain [14]. However, in the pre-strained sample many Cr-rich M23C6 carbides were observed inside the grains. The volume fraction of M23C6 carbides inside the grains in the pre-strained sample was much higher than in the non-strained sample, and their particle size, about 62 nm, was much smaller than the roughly 167 nm of the non-strained sample. Nakazawa reported that fine Cr-rich M23C6 carbide precipitated on dislocations in 20% cold-worked 304 steel; the creep resistance of the cold-worked sample increased while the creep rupture strain deteriorated, owing to the increase in intragranular strength produced by the fine intragranular M23C6 precipitation [15]. The effect of precipitates inside the grains on creep deformation has been discussed in terms of the Orowan stress in several works. R. C. Reed et al. observed γ' particles and dislocations in strained U720Li, a Ni-based superalloy [16]. 
They calculated the critical resolved shear stress (CRSS) necessary to move two coupled edge dislocations in the <110> direction on the {111} plane through a γ' particle, and revealed that when the γ' particle size exceeds 50 nm, a strongly coupled dislocation pair is activated. In the present study, the CRSS necessary for glide past the γ' particles in the pre-strained sample, with a particle size of 150 nm, was calculated to be 1.5 times larger than that of the non-strained sample with a particle size of 200 nm at 973 K [16]. It is considered that the intragranular strength of pre-strained Alloy263 increases through the fine precipitation of γ' and M23C6 carbide. From these results, the Orowan stress in Alloy263 was calculated using the following equation: τ = μb/λ, where μ is the modulus of rigidity, b is the length of the Burgers vector, and λ is the particle spacing of the γ' phase [17]. When only the γ' particles were taken into account, the Orowan stress in the pre-strained sample was larger by a factor of 1.08, i.e., almost the same as in the non-strained sample. However, when the grain-interior M23C6 particles were taken into account in addition to γ', the Orowan stress in the pre-strained sample was 1.7 times that of the non-strained sample (a numerical sketch of this estimate is given at the end of this article). It is considered that the creep strength of the pre-strained samples increased owing to the fine precipitation of M23C6 carbides and γ' in the grains. Effect of Pre-Strain on Grain Boundary Microstructure During Creep Deformation So far, several studies of the creep deformation mechanisms of boiler tube materials have been conducted. Takeyama reported that a grain boundary α2-W phase suppressed grain boundary deformation during creep in a Ni-20Cr-Nb-W alloy, and advocated this strengthening mechanism as the grain boundary precipitation strengthening mechanism [18]. Matsuo et al. found the same creep strengthening mechanism, produced by the grain boundary γ' phase, in the nickel-based alloy Nimonic 80A [19]. Recently, this mechanism has been applied to boiler tube alloys for A-USC plants: it is reported that a grain boundary Laves phase increases creep rupture life by suppressing local deformation near the grain boundary [20,21]. In order to consider the effect of grain boundary precipitates on creep deformation near grain boundaries, the area fraction ρ of grain boundary covered by precipitates was calculated for both the non-strained and the pre-strained samples crept at 1073 K and 140 MPa. ρ showed almost the same value: 64% in the non-strained sample and 67% in the pre-strained sample. It is considered that the improvement of creep properties by grain boundary precipitation strengthening was limited under the present creep conditions. Figure 9 shows kernel average misorientation (KAM) maps near grain boundaries in the non-strained and pre-strained samples, creep-interrupted and ruptured at 1073 K and 140 MPa. In the non-strained sample, many small-angle boundaries with misorientations below 5° formed near the grain boundaries, indicating that sub-boundary formation was in progress. On the other hand, recrystallization had finished in the creep-interrupted pre-strained sample, showing that recrystallization was promoted by pre-strain. The recrystallization proceeded by migration of the initial grain boundaries, which indicates that discontinuous dynamic recrystallization (DDRX) was taking place [22][23][24]. Several micro-cracks were observed in the creep-ruptured sample at grain boundaries where recrystallization had not occurred. 
These results show that dynamic recrystallization was promoted by pre-strain and that micro-cracks formed at the end of accelerating creep near the initial grain boundaries where dynamic recrystallization had not occurred. Blocky M23C6 carbides at the grain boundaries of the creep-ruptured non-strained sample and the creep-interrupted pre-strained sample at 1073 K and 140 MPa are shown in Figure 10. The particle size of the grain boundary carbides in the non-strained sample was about 176 nm, whereas the pre-strained sample showed a somewhat smaller particle size of about 122 nm. It is considered that the resistance against crack propagation at grain boundaries was increased by the fine precipitation of grain boundary carbides, which explains the increased crack density in the pre-strained sample shown in Figure 5. It has been reported that fine, discontinuous precipitates at grain boundaries enhance the notch toughness of alloys at elevated temperature because grain boundary migration and grain boundary sliding are limited [13]. Mino revealed that recrystallization at grain boundaries during creep of cold-worked Alloy617 deteriorated the creep strength, and that grain boundary carbides suppressed grain boundary migration [9]. In this work, the fine M23C6 carbides at the grain boundaries of the pre-strained sample are considered to enhance the resistance against creep deformation at grain boundaries during transient creep and against crack propagation during accelerating creep. Accordingly, the creep deformation mechanism of pre-strained Alloy263 investigated in this study can be summarized as follows. In the transient creep stage, the creep rate was decreased by precipitation hardening from γ' and M23C6 carbides inside the grains; little effect of dislocation hardening on the creep resistance of the pre-strained samples was observed. The non-strained sample then entered accelerating creep earlier, owing to the coarsening of γ' at the minimum creep rate. Meanwhile, dynamic recrystallization and the formation of η phase, accompanied by the formation of a precipitation free zone (PFZ), occurred in the pre-strained sample at the same time; however, the pre-strained sample remained in transient creep owing to the fine precipitation of γ' and M23C6 carbide inside the grains. After the onset of accelerating creep in the non-strained sample, dynamic recrystallization and the formation of η phase with a PFZ resulted in crack initiation at the initial grain boundaries where recrystallization had not finished. The pre-strained sample, in turn, showed accelerating creep due to the coarsening of the γ' phase; during accelerating creep, cracks initiated at grain boundaries while the fine carbides at the grain boundaries suppressed crack propagation. On the basis of these results, the increase in creep strength of the pre-strained samples was due to the fine precipitation of carbides and γ' inside the grains, which overcame the weakening effects of dynamic recrystallization and the formation of the PFZ at grain boundaries. Conclusion In this study, the effect of pre-strain on the creep strength and microstructure change during creep deformation of Alloy263 was investigated. The obtained results are as follows. (1) The time to minimum creep rate and the rupture time were increased by pre-strain, whereas the rupture strain decreased drastically with the introduction of 30% pre-strain in Alloy263. (2) Fine precipitation of γ' and M23C6 inside the grains was observed in the pre-strained sample. In addition, the grain boundary M23C6 precipitated more finely with pre-strain. (3) The η phase precipitated during creep in Alloy263, accompanied by the formation of a PFZ at the grain boundaries. 
However, the area fraction of grain boundary covered by precipitates (ρ) did not change with pre-strain. Furthermore, dynamic recrystallization at grain boundaries was promoted by pre-strain. (4) The creep strength of pre-strained Alloy263 increased owing to the fine precipitation of γ' and M23C6 carbide inside the grains induced by pre-strain. A weakening effect of dynamic recrystallization and of η phase precipitation during creep on the creep properties of pre-strained Alloy263 was not observed under the present creep conditions.
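As a closing numerical illustration of the Orowan estimate referenced above, the following minimal sketch assumes typical magnitudes for a Ni-base alloy; the shear modulus, Burgers vector, and particle spacings are assumptions, not measured data from this study, chosen only to show how a shorter effective spacing yields the reported factor of roughly 1.7:

```python
def orowan_stress(mu, b, spacing):
    """Orowan stress: tau = mu * b / lambda."""
    return mu * b / spacing

mu = 80e9        # shear modulus in Pa (assumed typical value)
b = 2.5e-10      # Burgers vector length in m (assumed typical value)

tau_non = orowan_stress(mu, b, 200e-9)   # wider, gamma'-only spacing
tau_pre = orowan_stress(mu, b, 120e-9)   # spacing shortened by fine M23C6
print(tau_pre / tau_non)                 # ~1.67, i.e. roughly 1.7x
```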
3,939.6
2017-10-13T00:00:00.000
[ "Materials Science" ]
Decision trees in epidemiological research Background In many studies, it is of interest to identify population subgroups that are relatively homogeneous with respect to an outcome. The nature of these subgroups can provide insight into effect mechanisms and suggest targets for tailored interventions. However, identifying relevant subgroups can be challenging with standard statistical methods. Main text We review the literature on decision trees, a family of techniques for partitioning the population, on the basis of covariates, into distinct subgroups who share similar values of an outcome variable. We compare two decision tree methods, the popular Classification and Regression Tree (CART) technique and the newer Conditional Inference tree (CTree) technique, assessing their performance in a simulation study and using data from the Box Lunch Study, a randomized controlled trial of a portion size intervention. Both CART and CTree identify homogeneous population subgroups and offer improved prediction accuracy relative to regression-based approaches when subgroups are truly present in the data. An important distinction between CART and CTree is that the latter uses a formal statistical hypothesis testing framework in building decision trees, which simplifies the process of identifying and interpreting the final tree model. We also introduce a novel way to visualize the subgroups defined by decision trees; this graphical visualization provides a more scientifically meaningful characterization of the subgroups identified by decision trees. Conclusions Decision trees are a useful tool for identifying homogeneous subgroups defined by combinations of individual characteristics. While all decision tree techniques generate subgroups, we advocate the use of the newer CTree technique due to its simplicity and ease of interpretation. Electronic supplementary material The online version of this article (doi:10.1186/s12982-017-0064-4) contains supplementary material, which is available to authorized users. Background The framing of medical research hypotheses and the development of public health interventions often involve the identification of high-risk groups and of the effects of individual factors on the relevant outcome [1,2]. For example, the prevalence of obesity in the United States has more than doubled in the past 30 years [3,4], and this trend is associated with a complex combination of factors. However, excessive calorie consumption and inadequate physical activity are not solely responsible for this problem; numerous other factors such as socio-economic differences, demographic characteristics, physical environment, genetics, eating behaviors, etc. also influence the energy intake balance and weight status. While individual effects can be measured efficiently, characterizing these factors in relation to an outcome of interest can be challenging. Effects of continuous variables (e.g., age) may be non-linear, and vary with other continuous (e.g., years of education) and categorical (e.g., sex) variables. Regression models have long been utilized for prediction and to examine the relationships between covariates and responses of interest. However, their ability to identify interactions between covariates and relevant population subgroups is restricted by the data analyst's decisions about how covariates are defined and included in the model. 
For example, even in the very simple case of partitioning the population into two maximally distinct groups on the basis of a single continuous predictor X, one would need to fit separate models with categorical predictors indicating that X exceeded a particular threshold value, for many different threshold values. Since many candidate models may have to be investigated in this somewhat ad hoc manner, Type I error may be inflated. The main goal of this paper is to introduce and describe the family of statistical methods known as decision trees, a family which is particularly well-suited to exploring potentially non-linear relationships between variables and identifying population subgroups who are homogeneous with respect to outcomes. Decision trees have been utilized to identify joint effects of air pollutants [5], generate a realistic research hypothesis for tuberculosis diagnosis [6], and recognize high-risk subgroups to aid tobacco control [7]. After providing a brief overview of decision trees, we introduce a novel data visualization technique for summarizing the subgroups identified by the trees. Next, we explore the differences between a commonly used technique for building decision trees, CART, and the conditional inference tree (CTree) approach, which has not been widely used in epidemiological applications. Based on simulation results and analyses of real data, we discuss the relative strengths and weaknesses of these two approaches and the resulting implications for data analysis. Application: the Box Lunch Study Throughout this paper, we present examples and analyses based on variables collected in the Box Lunch Study (BLS), a randomized controlled trial designed to evaluate the effect of portion size availability on caloric intake and weight gain in a free-living sample of working adults. The main randomized comparisons of the BLS (along with details of ethics approval and consent information) have been reported elsewhere [8,9]. However, the data also provide the opportunity to explore associations between outcomes and individual characteristics. Available covariates include demographic (e.g. age, gender, race, height, education), lifestyle (e.g. smoking status, physical activity levels), and psycho-social measures (e.g. frequency of self-weighing, degree of satisfaction with current weight). Responses to the Three Factor Eating Questionnaire (TFEQ) [10] quantifying the constructs of hunger, disinhibition, and restraint were also recorded. The BLS also collected data on some novel, laboratory-based psychosocial measures that had not previously been measured in a randomized trial setting, such as the relative reinforcing value of food (rrvf), liking, and wanting. Software availability The analyses, simulations, and visualizations presented in this paper were all produced using the freely-available statistical software R [11][12][13][14]. External packages and functions used are referenced in the text. Code for our novel visualization is available at https://github.com/AshwiniKV/visTree and for reproducing our example trees and our simulation study at https://github.com/AshwiniKV/obesity_decision_trees. A brief introduction to decision trees A decision tree is a statistical model for predicting an outcome on the basis of covariates. The model implies a prediction rule defining disjoint subsets of the data, i.e., population subgroups that are defined hierarchically via a sequence of binary partitions of the data. 
The set of hierarchical binary partitions can be represented as a tree, hence the name. The predicted outcome in each subset is determined by averaging the outcomes of the individuals in the subset. The goal is to create a prediction rule (i.e., a tree) which minimizes a loss function that measures the discrepancy between the predicted and true values. Decision trees have several components, as illustrated in Fig. 1, which summarizes the association between the outcome of daily caloric intake and hunger, disinhibition, restrained eating, relative reinforcement, liking, and wanting. Nodes contain subsets of the observations; the root node of a tree (labeled with a '1' in Fig. 1) contains all observations (n = 226 in the Box Lunch Study). The key step in algorithms for constructing decision trees is the splitting step, where the decision is made on how to partition the sample (or sub-sample, for nodes below the root) into two disjoint subsets according to covariate values. The splits below a node are represented as branches in the tree. Splitting continues recursively down each branch until a stopping rule is triggered. A node where the stopping rule is satisfied is referred to as a leaf or a terminal node. Taken together, the terminal nodes define a disjoint partition of the original sample; each observation belongs to exactly one terminal node, depending on its covariates. A prediction for a new observation's outcome is made by determining (based on that observation's covariates) which leaf it belongs to, then combining the outcomes of the existing observations within that leaf to get a predicted value. In Fig. 1, both the outcome and predictors are standardized column-wise to have mean zero and variance equal to one. Standardization puts all the predictors on the same scale, which may be helpful when, as here, some of the predictors (e.g., rrvf, liking, and wanting) are measures that do not have universally agreed-upon units or methods of measurement. For example, in Fig. 1, the root node with node ID '1' partitions the population into two groups: (1) subjects whose hunger measurement is less than or equal to 1.69 standard deviations above the mean hunger, and (2) subjects whose hunger is greater than 1.69 standard deviations above the mean. Standardizing the outcome allows for a similar interpretation of the leaf nodes: the leaf with node ID = 6 has a value of 0.26, indicating that the mean 24-h energy intake for the subjects contained in this node (i.e., those with hunger ≤1.69, liking > −0.28, and rrvf > −1.26) is 0.26 standard deviations above the overall mean of 24-h energy intake. A mean of 0.26 standard deviations of 24-h energy intake corresponds to a value of 2190 kilocalories. Adjusting for covariates Often, factors such as age, sex, and education level may influence the outcome of interest and be associated with other predictors (i.e., they are confounders), but their effects are not of primary interest. In linear regression, it is common practice to adjust for such variables by including them in the regression model. In decision trees, an analogue to covariate adjustment involves building the tree using adjusted residuals, i.e., residuals from a regression model containing the confounders. To be precise, suppose that one wished to assess the effects of the predictors described in the previous sections, adjusting for age, sex, and BMI. 
Letting Y denote 24-h energy intake, one would first fit the model Y = β0 + β1·age + β2·sex + β3·BMI + ε. Given coefficient estimates β̂0, β̂1, β̂2, and β̂3, the age-, sex-, and BMI-adjusted residuals for 24-h energy intake, Y*, are Y* = Y − (β̂0 + β̂1·age + β̂2·sex + β̂3·BMI). The residuals Y* can then be used as the outcome in a regression tree including the predictors of interest. This adjusted residuals technique can be easily applied using standard software (a short sketch is given at the end of this section). Visualizing subgroups in decision trees One of the most attractive features of decision trees is that they partition a population sample into subgroups with distinct means. However, the typical display of a decision tree (e.g., Figs. 1 and 2) does not always allow researchers to easily characterize these subgroups. The problem is particularly acute if some of the predictor variables do not have an interpretable scale built on established norms: the relative reinforcing value of food and the degrees of liking/wanting measured in the Box Lunch Study are novel and have not yet been widely used, so a standard unit of measurement has not yet been established. To address this limitation, we developed a software tool for visualizing the composition of subgroups defined by decision trees. The visualization consists of a grid of plots, one corresponding to each terminal node (i.e., population subgroup). In Fig. 3, each plot in this grid corresponds to one of the four terminal nodes (population subgroups) in Fig. 1, i.e., nodes 3, 5, 6, and 7. In the background of each plot is a histogram summarizing the distribution of the outcome variable (here, 24-h energy intake) for the individuals in the terminal node/subgroup. For example, the top left plot in Fig. 3 shows a distribution of (standardized) 24-h energy intake that is right-skewed. The numbers along the x-axis are the average 24-h energy intake within each individual bin of the histogram. The vertical line shows the overall mean of the subgroup; the mean and subgroup size are shown in the plot title. Overlaid on the background are colored bars; the length and position of the bars represent the set of predictor values, on the percentile scale, which define the subgroup. The subgroup corresponding to the top left plot of Fig. 3 is defined by liking values below −0.28, which represents the 39th population percentile, and hunger values below 1.69, which represents the 91st percentile. This visualization summarizes, at a glance, the characteristics of the groups determined by the regression tree. For instance, in Fig. 3, the four groups could be characterized as: Group 1 (N = 86): moderate to low liking, all but very high hunger; this group has below-average energy intake (standardized mean = −0.46). Group 2 (N = 22): moderate to high liking, very low relative reinforcing value of food, all but very high hunger; this group has moderate to low energy intake. Group 3 (N = 104): moderate to high liking, all but very low relative reinforcing value of food, all but very high hunger; this group has moderate to high energy intake. Group 4 (N = 14): very high hunger; this group has very high energy intake. The prediction rules defining these subgroups provide insight into the individual characteristics that can affect the outcome, and can be used to define categorical variables that could yield more meaningful and interpretable comparisons in future analyses. 
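As noted above, the adjusted-residuals step is straightforward with standard software. The article's analyses used R (rpart, partykit); purely as a language-neutral sketch of the same two-stage idea, with all variable names and simulated values as placeholders, the approach can be written with scikit-learn as follows:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 226
Z = np.column_stack([rng.normal(45, 10, n),      # confounder: age
                     rng.integers(0, 2, n),      # confounder: sex
                     rng.normal(28, 5, n)])      # confounder: BMI
X = rng.normal(size=(n, 6))                      # predictors of interest
y = 2000 + 12 * Z[:, 2] + 400 * (X[:, 0] > 1.7) + rng.normal(0, 300, n)

# Stage 1: regress the outcome on the confounders, keep the residuals Y*.
confounder_fit = LinearRegression().fit(Z, y)
y_star = y - confounder_fit.predict(Z)

# Stage 2: grow a regression tree on Y* using the predictors of interest.
tree = DecisionTreeRegressor(min_samples_split=20,
                             min_samples_leaf=7).fit(X, y_star)
print(tree.get_n_leaves())
```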
Methods for building decision trees Classification and regression trees (CART) The most popular method for constructing decision trees, known as CART (Classification and Regression Trees) was introduced by Breiman [18]. In a CART (e.g., Fig. 2), a split is sought to minimize the relative sum of squared errors in the two partitions resulting from the split. The search for splits in CART takes place across two dimensions simultaneously: the covariate to split on and splitting point within that covariate. In other words, the splitting step in CART is greedy: the best split is sought across all covariates and candidate split points for those covariates. For binary and categorical covariates, all possible values are considered as possible split points; for continuous covariates, an equally-spaced grid covering the range of possible values is usually considered. Because it searches over all possible splits on all covariates, CART is vulnerable to the so-called biased variable selection problem; there are more potential "good" splits on a continuous-valued covariate (or one with a large number of distinct values) than on a binary covariate. This tendency of CART to favor variables with many possible splits has been described in [18][19][20] and [21]. Furthermore, the nature of the splitting process makes it difficult to describe the statistical properties of any particular split. For instance, CART is not concerned with the notion of Type I error since it does not control the rate at which a regression tree identifies population subgroups when there is truly no heterogeneity in the mean of the outcome. Conditional inference trees (CTree) As an alternative to CART, Hothorn et al. [22] proposed the conditional inference tree (CTree). Unlike CART, CTree (e.g., Fig. 1) separates the splitting process into two distinct steps. The first step is to determine the variable to split on based on a measure of association between each covariate and the outcome of interest. Then, after the splitting variable has been determined, the best split point for that variable is calculated. In contrast to CART, CTree follows formal statistical inference procedures in each splitting step. The association between each covariate and the outcome is quantified using the coefficient in a regression model (linear regression for continuous outcomes and other suitable regression models for other outcome types), and a node is only chosen to be split if there is sufficient evidence to reject the global null hypothesis, i.e., the hypothesis that none of the covariates has a univariate association with the outcome. If the global null hypothesis is rejected, then the covariate that displays the strongest association with the outcome of interest is selected as a candidate for splitting. If the minimum p-value is larger than the multiplicity adjusted significance threshold, then no variable is selected for splitting and the node is declared a terminal node. Note that, despite its name, CTree bases splitting decisions on marginal (i.e., univariate) regression models; the "conditional" refers to the fact that, following the initial split, subsequent inference takes place within subgroups, i.e., conditional on subgroup membership. Stopping rules In both CART and CTree, splitting continues until a stopping rule triggers. In CART, splitting stops when the relative reduction in error resulting from the best split falls below a pre-specified threshold known as the complexity parameter. 
Typical values of this parameter are in the range of 0.001-0.05. To prevent overfitting, it is common practice to construct trees for a sequence of values of this parameter, and select the final value by minimizing prediction error estimated by cross-validation or on an independent test set. This process is referred to as pruning [23,24]. A slightly more conservative stopping rule sets the final complexity parameter to the value which yields a prediction error one standard deviation larger than the minimum estimated by cross-validation or on an independent test set. This is known as the 1-SE rule. As noted above, CTree's stopping rule is simple: splitting stops if the global null hypothesis is not rejected at the pre-determined, multiplicity adjusted level of significance. Comparing CART and CTree: a simulation study In this section, we describe simulated and real data and develop scenarios within a simulation study to highlight distinctions between CART and CTree. We also compare their predictive performance to standard regression models in a variety of settings and perform simulations utilizing the R statistical software package, version 3.3.0 [11]. The results of this study are presented in "Results" section. The CART algorithm was implemented using the rpart package [13], while the CTree was implemented via the partykit package [12]. We considered a variety of scenarios where we varied the data-generating function, covariate type (categorical vs. continuous), the sparsity (proportion of variables predicting the outcome), the total sample size, and the complexity parameter for CART. For all scenarios other than the one where sample size was varied, the sample size was fixed at 250 and in all scenarios trees were constructed using six covariates. Continuous outcomes were generated as independent N (η, 1) with linear predictor η varying across scenarios as described below. Continuous covariates were generated from independent Normal distributions with mean zero and unit variance; binary covariates were generated as independent Bernoulli(p = 0.5). Pruning for CART was carried out using both the minimum and the 1-SE rule, with the 1-SE rule being implemented using the DMwR package [14]. The tree-generating functions rpart (for CART) and ctree (for CTrees) were applied with arguments specifying a minimum of 20 observations for a node to be considered for splitting and a minimum of 7 observations in a terminal node. The complexity parameter for CART was held at the default value of 0.01. The level of significance in the CTree was held at the default value of α = 0.05. For each scenario, 10,000 simulations were performed, where in each simulation a training dataset was simulated and used to construct the trees, and tree performance was evaluated on an independently generated test dataset. Prediction error and tree complexity were summarized respectively via the mean squared error (MSE) and the number of terminal nodes (equal to the total number of splits in the tree, plus one). Effect of the data generating process Decision trees perform well in situations where the underlying population is partitioned into a relatively small number of subgroups with distinct means. However, they are less suited to scenarios in which the outcome varies continuously with covariate values. 
We started by generating independent, normally distributed outcomes according to a pre-specified tree structure, i.e., a set of splits leading to seven terminal nodes with mean values (−1.88, −0.30, −0.31, 0.25, −0.09, 2.23, 1.35) and unit variance. The candidate covariates for this tree included six continuous covariates (X1, . . . , X6), mimicking the six covariates considered in the introductory examples above. This generating tree consists of seven terminal nodes with splits on hunger, liking, rrvf, and disinhibition. In a different scenario, continuous responses are generated from N(η, 1), where η follows a linear regression model in six independent, normally distributed continuous covariates X1, . . . , X6. We also generated a hybrid model, with normally distributed data with unit variance according to N(η, 1), where X1, X2, and X3 are simulated as independent, normally distributed continuous covariates and are also used to form distinct subgroups represented by three different indicator functions, denoted by 1. This hybrid model includes main effects of the three continuous covariates along with interaction terms and subgroup indicators constructed from these covariates. Type I error We also evaluated the Type I error rate of the different tree-building algorithms. For a tree, we say that a Type I error occurs if the tree splits on a variable that has no association with the outcome. To evaluate Type I error, we generated six independent and normally distributed continuous covariates and a response with mean zero and unit variance, unrelated to the covariates. Figure 4 summarizes the predictive performance of the tree types as the sample size changes. For each sample size n = 30, 250, 500, 1000, 3000, and 5000, we generated six covariates, and continuous responses were generated from N(η, 1) with η following the linear regression model η = 1.5X1 + 1.25X2 + 1X3 + 0.85X4 + 0.75X5 + 0X6. Comparing CART and CTree: a simulation study Effect of the data generating process The Tree results (for the model that generates data from a tree structure) in the first five rows of Table 1 summarize the estimated prediction error (MSE) and tree complexity (mean, 20th, and 80th percentile number of terminal nodes) of CTree on the generated data, in comparison with three other tree algorithms (the unpruned CART and CART with two types of pruning) and with the results from a linear regression model. As expected, all the tree-based techniques have lower MSE than linear regression. In this case, CTree produces trees with a similar number of terminal nodes to CART pruned with the 1-SE rule, but a lower number of nodes than the regularly pruned CART. The CTree and both types of pruned CART yield decision trees with 3-4 terminal nodes, in contrast to the generating tree structure with seven terminal nodes. This is likely due to the fact that our simulated tree data contained several nodes with very similar means. The second set of results in Table 1 (Regression) summarizes performance for all four model types. The (correctly specified) linear regression model has far better predictive performance than the tree models. Interestingly, CTree has better predictive accuracy than the pruned versions of CART, a result which agrees with the findings of Schaffer [25] that pruning does not necessarily improve predictive accuracy, particularly when there are many (here, infinitely many) subgroups. 
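To give a flavor of this comparison (the article's simulations were run in R; in the Python sketch below, scikit-learn's DecisionTreeRegressor stands in for a CART-style tree, and all settings are merely indicative), one can contrast the tree-structured and linear data-generating processes directly:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

def simulate(n, kind):
    X = rng.normal(size=(n, 6))
    if kind == "linear":
        eta = 1.5*X[:, 0] + 1.25*X[:, 1] + X[:, 2] + 0.85*X[:, 3] + 0.75*X[:, 4]
    else:  # a simple tree-structured mean with three subgroups
        eta = np.where(X[:, 0] > 0, np.where(X[:, 1] > 0, 2.0, 0.5), -1.5)
    return X, eta + rng.normal(size=n)

for kind in ("tree", "linear"):
    X_tr, y_tr = simulate(250, kind)
    X_te, y_te = simulate(250, kind)
    cart = DecisionTreeRegressor(min_samples_split=20,
                                 min_samples_leaf=7).fit(X_tr, y_tr)
    ols = LinearRegression().fit(X_tr, y_tr)
    print(kind,
          round(mean_squared_error(y_te, cart.predict(X_te)), 2),
          round(mean_squared_error(y_te, ols.predict(X_te)), 2))
```

As in the article, the tree tends to win under the tree-structured process, while the correctly specified linear model wins when the outcome varies continuously with the covariates.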
For the hybrid scenario, when data are generated from the defined hybrid model, we compare the performance of the trees to a partially misspecified linear regression model containing only the main effect terms for the continuous covariates. The results in Table 1 show that predictive accuracies are relatively similar. Type I error The results are presented in Table 2. We found that the unpruned CART algorithm continues to split and grow, unlike the pruned CARTs and CTree. CARTs pruned using the 1-SE rule are rather conservative, with a very low Type I error, while the pruned CART and CTree have Type I errors that are closer to 0.05. As noted below, explicit control of the Type I error rate is an advantage of the CTree approach. Effect of sample size We observe in Fig. 4 that as sample size increases, the MSE of CTree continues to improve, while that of the CART variants levels off beyond n = 500. The reason for this behavior is that CART's stopping rules are based on a complexity parameter, which sets a lower bound for improvement in model fit that is insensitive to sample size. In the rpart package, the default complexity parameter value is 0.01, so splitting stops if no split improves model fit by at least 1%. In this setting, the covariates have continuous linear effects, which implies an infinite number of population subgroups. Hence, most splits will yield small improvements in model fit, and the CART variants will "stop too soon" and have poor predictive performance. In contrast, the stopping criterion for the CTree is based on p values, and maintaining a fixed p value threshold with increasing sample size allows splits associated with smaller and smaller effect sizes to be represented in the tree. Application We illustrate the application of decision trees to the Box Lunch Study by comparing a linear regression model and a decision tree that seek to predict 24-h energy intake (in kcal/day) using a set of 25 covariates measured at baseline. These prediction models were built on the covariates introduced in "Application: the Box Lunch Study" section, such as restrained eating, rrvf, and liking, as well as other covariates that record demographic characteristics including age, sex, and BMI. Other covariates included were psycho-social measures such as "Influence of weight on ability to judge personal self", "Ability to limit food intake to control weight (days/month)", and "Frequency of weighing oneself". To provide a baseline for comparison, we present results from a linear regression model in Table 3. The covariates listed are those selected using backward elimination with the AIC. While there are many significant covariates in Table 3, this linear regression does not provide any information about potential interactions, nor does it identify particular population subgroups that share similar values of the outcome. Figure 5 shows a conditional inference tree to predict total energy intake, adjusted for age, sex, and BMI, from 22 baseline covariates. The corresponding CART regression tree is provided in Additional file 1. The overall structure and splitting of the CART and CTree are similar, though CART has more splits than CTree. The prediction mean squared error (using scaled energy intake values) for the conditional inference tree in Fig. 5 is 0.67, compared to 0.48 for the linear regression in Table 3. While the mean squared error is lower for linear regression, it may provide only limited scientific insight into the complex mechanisms underlying energy intake.
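As an illustration of the adjustment-plus-tree workflow described here, the R sketch below uses simulated stand-in data. The variable names (kcal24h, skcal, hunger) mirror those in the text, but the dataset, sample size, and effect sizes are purely illustrative, not the study's.

set.seed(3)
n <- 200                                # illustrative sample size only
bls <- data.frame(age = rnorm(n, 40, 10), sex = rbinom(n, 1, 0.5),
                  bmi = rnorm(n, 27, 4), skcal = rexp(n, 1/500),
                  hunger = runif(n, 0, 10))
bls$kcal24h <- 2000 + 3 * bls$skcal + rnorm(n, 0, 300)   # toy outcome

# step 1: adjust energy intake for age, sex, and BMI by linear regression
adj_fit <- lm(kcal24h ~ age + sex + bmi, data = bls)
bls$resid_intake <- resid(adj_fit)      # adjusted residuals, as in Fig. 5

# step 2: fit a CTree to the residuals using the remaining covariates
ct_app <- partykit::ctree(resid_intake ~ skcal + hunger, data = bls)
plot(ct_app)                            # a display in the style of Fig. 5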
Only the decision tree enables the identification of meaningful population subgroups and allows for formal inference about the defined groupings. For example, at the top level of the tree, the variable most strongly associated with (adjusted) total energy intake is snack calories (skcal, p < 0.001). Splitting the population according to snack calories ≤798.22 versus >798.22 produces two subgroups. Within the first group (following the left branch in Fig. 5), snack calories remain the most significant predictor of total energy intake (p < 0.001), while in the second group (the right branch of Fig. 5) none of the covariates are significantly associated with the outcome. The first group (skcal ≤798.22) again splits into two groups: snacking calories ≤339.79 and >339.79 (but ≤798.22). In the former, "low snacking" group, the covariate most strongly associated with total energy intake is servings of sugar-sweetened beverages (srvgssb, p = 0.01), which defines subgroups according to whether individuals consumed ≤ or >0.53 SSBs per day. In the latter, the strongest association is with hunger (p = 0.01), which splits into subgroups according to hunger ≤7 or >7. The lower hunger group splits one more time on snack calories. Within the former "low snacking" group that splits to define a subgroup that consumes ≤0.53 SSBs per day, the covariate most strongly associated with energy intake is servings of fruits and vegetables (srvgfv0, p = 0.044), which defines subgroups according to whether individuals consumed ≤ or >2.04 servings per day. Decision trees are typically used to describe the associations between a set of covariates and an outcome, and thereby identify population subgroups with different outcome values. In our setup, there is no one particular exposure or treatment variable of interest, so there is not one focal variable whose effect may be modified by others. However, recursive partitioning does identify relevant interactions between covariates, i.e., combinations of covariate values which result in different (mean) values of the outcome. Hence, if the term "effect modification" is identified with "interaction", then decision trees can be viewed as a tool for exploring effect modification. Figure 6 is composed of 7 sub-plots that represent each of the terminal nodes (i.e., subgroups) in Fig. 5. The top left sub-plot in Fig. 6 corresponds to node #5 (n = 23) in Fig. 5. The mean of adjusted residuals is −702.94, indicating that on average, individuals in this node have a daily energy intake 702.94 kcal lower than the age-, sex-, and BMI-adjusted population mean. In the top left sub-plot in Fig. 6, colored horizontal bars describe the population subgroup of node #5: individuals with low to moderate servings per day of sugar-sweetened beverages (≤0.53 servings per day, i.e., below the 60th population percentile), low servings per day of fruits and vegetables (≤2.04 servings per day, i.e., below the 25th population percentile) and low to moderate snack calories (≤339.79 kcal per day, below the 50th population percentile).
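Continuing the illustrative sketch above, the terminal-node membership underlying subplots like those in Fig. 6 can be recovered with partykit's node predictions:

bls$node <- predict(ct_app, newdata = bls, type = "node")  # terminal node id
aggregate(resid_intake ~ node, data = bls, mean)  # mean adjusted residual/node
table(bls$node)                                   # subgroup sizes per node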
The bottom row of plots corresponds to the three nodes which had the highest adjusted average caloric intake (+455.47, +486.66, and +1210.44 kcal/day relative to the adjusted population mean, respectively). These nodes defined three distinct subgroups: (1) low to moderate hunger (≤7, below the 80th percentile) and relatively high snacking (627-798 kcal/day, between the 89th and 92nd percentiles); (2) high hunger (>7, above the 80th percentile) and moderate snacking (340-798 kcal/day, between the 58th and 92nd percentiles); and (3) very high snacking calories (≥798 kcal/day, above the 92nd percentile). The fact that the first two of these groups have relatively similar adjusted mean daily caloric intake while being defined by distinct combinations of hunger and snacking levels (low hunger, moderate to high snacking in the first group vs. high hunger, moderate snacking in the second) suggests that there are multiple pathways which lead to similar levels of consumption of excess calories. These distinct pathways may require different intervention strategies: for example, the low hunger but moderate to high snacking group might be effectively targeted by an approach which sought to reduce snacking opportunities, under the logic that, due to their relatively low hunger level, they are more likely to be snacking out of convenience than to satisfy a craving. The high hunger but more moderate snacking group, on the other hand, might be more responsive to an approach aimed at managing cravings. Yet another approach might be required to optimize outcomes for the third group, whose extremely high adjusted daily caloric intake (+1210.44 kcal/day relative to the population) was associated with extremely high snacking but not hunger. Conclusions Decision trees can be a powerful tool in a researcher's data analysis toolbox, providing a way to identify relevant population subgroups which may provide insight into associations and effect mechanisms, and suggest strategies for tailoring interventions. In this paper, we compared two techniques for constructing decision trees, CART and CTree, and introduced a novel graphical visualization technique for decision trees which allows a researcher to see and compare the characteristics of these subgroups. Our focus was on describing relationships between a relatively small number of continuous or binary covariates and continuous outcomes in studies with moderate sample sizes, but decision trees can easily be extended to problems with larger sample sizes [26,27], greater numbers of covariates, and to modeling other covariate and outcome types [28,29]. The CTree approach in particular accommodates a wide variety of data types, including categorical and time-to-event outcomes, within the same statistical framework. While the data we used to illustrate the application of decision trees arose from a randomized controlled trial, we performed cross-sectional analyses on baseline data and hence did not use information on treatment assignment. As with any technique based on identifying statistical associations, decision tree methods do not estimate causal effects of individual characteristics or exposures in such cross-sectional analyses. The adjustment procedure we describe above allows the researcher to account for measured variables that are thought to be confounders, but the additional flexibility provided by decision tree models cannot correct for bias due to unmeasured confounding. Hence, conclusions based on decision tree analysis should be viewed as exploratory.
In ongoing work, we are extending the decision tree framework to characterize (causal) treatment effect heterogeneity (i.e., causal effect modification) in the context of randomized intervention studies. The two decision tree fitting techniques we compared in this paper, CART and CTree, have different strengths and weaknesses. CART has the advantage of availability: it is widely implemented in standard statistical software packages, while to our knowledge, conditional inference trees are currently only implemented in R. In our experiments, CART often had slightly higher predictive accuracy than CTree due to its additional flexibility. However, CTree offers several advantages over CART. First, CTree yields a simpler tree-building process compared to CART, since in CTree a single overall Type I error rate parameter (α) controls the size of the tree and removes the need for pruning. The α value can be set independently of the outcome type (e.g., continuous, binary, time to event, etc.), unlike for CART, where the complexity parameter depends on the splitting criterion, which may differ depending on the outcome type. By using formal inferential techniques incorporating multiplicity adjustments to select splits, CTree provides statistical guarantees and valid p values at each split. Hence, the researcher deciding which technique to use must consider the relative value of giving up a small amount of model flexibility and predictive accuracy to simplify modeling and gain the ability to make formal statistical statements based on the results from the fitted tree.
7,896
2017-09-20T00:00:00.000
[ "Mathematics", "Medicine" ]
Swarm Satellite Magnetic Field Data Analysis Prior to 2019 Mw = 7.1 Ridgecrest (California, USA) Earthquake : This work presents an analysis of the ESA Swarm satellite magnetic data preceding the Mw = 7.1 California Ridgecrest earthquake that occurred on 6 July 2019. In detail, we show the main results of a procedure that investigates the track-by-track residual of the magnetic field data acquired by the Swarm constellation from 1000 days before the event and inside the Dobrovolsky's area. To exclude global geomagnetic perturbations, we select the data considering only geomagnetically quiet times, defined by thresholds on the Dst and ap geomagnetic indices, and we repeat the same analysis in two comparison areas at the same geomagnetic latitude of the Ridgecrest earthquake epicentre, not affected by significant seismicity in the period investigated here. As the main result, we find some increases of the anomalies in the Y (East) component of the magnetic field starting from about 500 days before the earthquake. Comparing such anomalies with those in the validation areas, it seems that the geomagnetic activity over California from 222 to 168 days before the mainshock could be produced by the preparation phase of the seismic event. This anticipation time is compatible with the Rikitake empirical law, recently confirmed with Swarm satellite data. Furthermore, the Swarm Bravo satellite, i.e., the one at the highest orbit, passed above the epicentral area 15 min before the earthquake and detected an anomaly mainly in the Y component. These analyses applied to the Ridgecrest earthquake not only intend to better understand the physical processes behind the preparation phase of medium-large earthquakes in the world, but also demonstrate the usefulness of a satellite constellation to monitor ionospheric activity and, in the future, to possibly enable reliable earthquake forecasting. In this paper, we search for possible electromagnetic satellite signals before the earthquake occurrence. Fraser-Smith et al. [3] found a clear magnetic disturbance at the ground in the ULF band of 0.05-0.20 Hz prior to the M7.1 Loma Prieta earthquake that occurred in California on 17 October 1989. The data were taken from a ground magnetic observatory very close (7 km away) to the impending earthquake epicentre. Following this promising observation from a ground observatory, it is natural to search for this type of anomaly in magnetic satellite data as well. Some of the first works that provided evidence in satellite data for electromagnetic disturbances preceding the occurrence of earthquakes in the world came from the DEMETER satellite (e.g., [4][5][6][7]). In recent years, our research group has proposed some electromagnetic satellite anomalies from the European Space Agency (ESA) Swarm constellation, which is composed of three identical satellites in orbit since 22 November 2013 [8], and from the CSES-01 (China Seismo-Electromagnetic Satellite) dataset, prior to medium (M6.0-M7.4) to large (M7.5+) earthquakes in the world [9][10][11][12][13][14]. More recently, a Worldwide Statistical Correlation (WSC) analysis was applied to 4.7 years of Swarm magnetic field and electron density data, finding a significant correlation of concentrations of ionospheric anomalies with worldwide shallow M5.5+ earthquakes in the same period [15].
Besides, they found that the largest concentrations of anomalies precede large earthquakes, with the anticipation time increasing with the magnitude of the seismic event, also confirming the Rikitake law [16] for electromagnetic pre-earthquake anomalies from satellite data. A possible mechanism that could explain these pre-seismic disturbances was described by Freund [17], who supposed a release of positive holes at the fault that could alter the lithospheric electric circuit, producing a chain of electrical, mechanical and chemical alterations of the atmosphere up to the ionosphere. Other mechanisms were described, for example, by Pulinets and Ouzounov [18], based on radon gas release in the preparation phase of large earthquakes. In this work, we focus our attention on the Ridgecrest earthquake that occurred on 6 July 2019 and the possible electromagnetic anomalies detected by the Swarm satellites during the preparation phase of the earthquake. This represents an extension of a recent paper [19] that analysed different physical quantities in the lithosphere, atmosphere and ionosphere; here we focus especially on the magnetic field data of the Swarm mission. This paper is structured as follows: the first section presents the data and methods used; the following section shows the results. Finally, we present some discussion and conclusions. Data and Methods We analysed the magnetic field data measured by the three identical satellites belonging to the Swarm constellation, called Alpha, Bravo and Charlie. They were launched by a single rocket on 22 November 2013 and are still in a quasi-polar orbit. After a few months of testing, in-orbit calibration and commissioning, the satellites were put in the final orbital configuration: Alpha and Charlie fly almost in parallel at a lower orbit (in 2019, about 440 km above the Earth's surface) with a small separation of about 1.4 degrees, while the third satellite, Bravo, flies at a higher orbit (around 510 km in 2019) with a longitudinal shift that drifts along the mission time; in 2019 it was about 90 degrees with respect to the orbit of the other two satellites. The orbital configuration was selected to take into account the different goals of the mission, mainly to measure the Earth's magnetic field and its variations, and in particular to measure the Field Aligned Currents (FAC) and discriminate them from the lithospheric field. The satellites are equipped with several instruments to measure the Earth's magnetic field, to monitor the ionospheric plasma environment and to determine the orbit and orientation of the satellites as precisely as possible (e.g., by Global Navigation Satellite Systems-GNSS, laser retroreflector, accelerometers). In this work, we analysed the data of the Vector Field Magnetometer (VFM) and the Absolute Scalar Magnetometer (ASM), placed at the middle and at the end of a four-metre boom, respectively, both located at the back of each satellite. ESA downloads the raw data from the Swarm satellites at the Kiruna and Svalbard stations and processes them in almost real time (with a delay of 3-4 days only). The Agency provides calibrated, open-access magnetic data at Level 1b, where the measurements are provided not only in the instrumental frame but are also oriented in the Earth frame system NEC (North, East, Centre), at the original sampling frequency of 50 Hz (HR = High Resolution) and resampled at 1 Hz at the GPS clock seconds (LR = Low Resolution).
In this work, we analysed the LR Magnetic Swarm product of all the satellites from 1000 days before the Ridgecrest mainshock. The data are provided with a quality check by means of 4 Flags: Flag_B and Flag_F are related to the quality of the measurement of each VFM magnetic field component and to the ASM scalar intensity, respectively. Flag_attitude indicates whether the pointing and attitude systems of the satellites are working properly, and Flag_platform provides some information about the general status of the satellite platform, including, for example, the indication of the activation of the thrusters. In order to extract magnetic anomalies possibly related to the major seismic events, we need to remove the main magnetic field. We then apply an approach successfully used in previous works and well described in the Methods section of [15] under the name of the MASS (MAgnetic Swarm anomaly detection by Spline analysis) algorithm. In particular, the magnetic field data are analysed by a numerical approximation of the temporal derivative, and then a cubic spline is fitted and subtracted to remove the long-term trend. Finally, a moving window (generally of 7 degrees in latitude) investigates the obtained residuals. The anomalies are defined by a threshold (named kt) on the root mean square (rms) of the moving window compared with the Root Mean Square (RMS) of the whole track between −50° and +50° magnetic latitude. Only the tracks acquired in quiet geomagnetic field conditions (|Dst| ≤ 20 nT and ap ≤ 10 nT) and with instruments and satellites in nominal conditions (checked by the Flags) are taken into account to search for anomalies. Finally, the algorithm can also produce a figure of the residual of the magnetic measurements (X, Y, Z and F) of the track, together with some orbital information and the geomagnetic indices Dst and ap during the satellite passage. The epicentre of the earthquake and the Dobrovolsky's area (an approximation of the earthquake preparation area described in [20]), where we search for the electromagnetic anomalies, are automatically represented as well. Figure 1 reports the analysis performed by the MASS algorithm of the magnetic data from Swarm Bravo track 5 acquired on 6 July 2019. The figure shows a map (in panel e) with the Earth's surface projection of the satellite track; the colour is related to the Flags: brown when the VFM instrument and satellite are in nominal condition, or light blue when Flag_attitude is equal to 18 (and the others are nominal). We note that this track preceded the earthquake occurrence by about 15 min. It presents two highlighted behaviours in the Y magnetic field component (panel b), underlined by a red circle and an orange one. The red circled anomaly is closer to the latitude of the earthquake, and it is entirely inside the Dobrovolsky's area (yellow circle on the map). The geomagnetic field conditions during this time were sufficiently quiet (Dst = 2 nT, ap = 4 nT, and the AE index, as well as Dst and ap, showed no particular activity during and in the hours before the passage of the satellite above the investigated region, thus excluding any possible penetrating electric field from the auroral regions). All the samples are acquired with Flags that indicate data of good quality for science, as indicated by ESA [21]. As marked in Figure 1, the flagged sections of the track (in light blue in panel e) are due to a bright object (e.g., the Sun) in one of the three star cameras.
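A much-simplified, single-component R sketch of the core MASS steps described above (numerical derivative, cubic-spline detrending, moving-window rms thresholding) is given below. The real algorithm includes further details and quality-flag handling, so this is an illustration rather than the authors' implementation.

mass_track <- function(B, lat, win_deg = 7, kt = 2.5) {
  dB  <- diff(B)                       # numerical time derivative of the field
  lat <- lat[-1]
  keep <- lat > -50 & lat < 50         # restrict to low/mid magnetic latitudes
  dB <- dB[keep]; lat <- lat[keep]
  trend <- predict(smooth.spline(lat, dB), lat)$y  # cubic-spline long-term trend
  res <- dB - trend                                # residuals to be scanned
  rms_track <- sqrt(mean(res^2))                   # RMS of the whole track
  centers <- seq(min(lat) + win_deg/2, max(lat) - win_deg/2, by = 0.5)
  win_rms <- sapply(centers, function(c0)
    sqrt(mean(res[abs(lat - c0) <= win_deg/2]^2)))
  data.frame(center = centers, anomalous = win_rms > kt * rms_track)
}
# In the real analysis, tracks are pre-filtered for quiet geomagnetic time
# (|Dst| <= 20 nT, ap <= 10 nT) and for nominal instrument/satellite Flags.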
The other two cameras were nominal, and no other issues were detected on this track, so we confirm that all the samples can be considered good for the purpose of the present paper (as two cameras are more than sufficient to properly rotate the data from the instrument to the NEC frame). The Swarm Alpha, Bravo and Charlie Y magnetic field component data have been systematically analysed. Here we have enough data to extend the analysis back to 1000 days before the earthquake, i.e., from 10 October 2016 to 6 July 2019. As significant concentrations of anomalies were found within 3.34 degrees of the epicentres of earthquakes by De Santis et al. [15], we decided to select a circular area of the same extension around the California earthquake. We applied the MASS algorithm, and for the analysis, we selected a threshold of kt = 2.5 within the sliding window of 7-degree length in latitude. Figures 2 and 3 show the cumulative number of Swarm Alpha, Bravo and Charlie anomalies in a circular area of 3.34 degrees around the Mw = 7.1 California 2019 epicentre (blue line) compared with validation areas in the US East Coast and Europe (EU), respectively, at the same magnetic latitude (red line) and with the same extension, centred on geographic coordinates 32.92° N, 82.5° W and 39.45° N, 3.30° W, respectively. The comparison regions have been chosen in a similar context (i.e., above continental areas, thus excluding fully oceanic ones) in order to recognise potential similarities that are surely not due to the California earthquake. We checked that in both comparison areas the United States Geological Survey (USGS) reports no M4.5+ earthquakes during the analysed time (i.e., from 10 October 2016 to 6 July 2019). Results The cumulative number of anomalies around the earthquake epicentre presents several changes of slope that probably underline particular geomagnetic activity, despite the fact that the data are selected only during quiet geomagnetic field times. Indeed, most of these behaviours also occurred in the comparison areas, in some cases with some delay. In particular, the part of the cumulates between −600 and −500 days presents very similar behaviour in all the considered areas (and an even steeper increase in the US East Coast around 550 days before the earthquake), pointing to a global effect affecting all the analyses. When the slope change in the cumulate happens in all the areas, we can exclude a possible relationship with the impending seismic event and attribute this behaviour to some global (but small) perturbations of the geomagnetic field, or at least those located in the Northern hemisphere. For this reason, even if there are two strong increases of anomalies at around −500 and −365 days (well visible also in the difference of the cumulates shown in Figures 2b and 3b), we tend to exclude a relationship with the preparation of the incoming earthquake.
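The cumulative-count comparison of Figures 2 and 3 can be sketched as follows; the anomaly-day lists here are randomly generated placeholders, not the detected anomalies.

set.seed(2)
days <- -1000:0                          # days relative to the mainshock
anom_epi <- sort(sample(days, 71))       # placeholder anomaly days, epicentre
anom_cmp <- sort(sample(days, 56))       # placeholder anomaly days, comparison
cum_count <- function(a) sapply(days, function(d) sum(a <= d))
cum_epi  <- cum_count(anom_epi)
cum_cmp  <- cum_count(anom_cmp)
diff_cum <- cum_epi - cum_cmp            # analogue of Figures 2b and 3b
# Slope changes common to both curves suggest global/regional geomagnetic
# effects; jumps present only in cum_epi are candidate precursory signals.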
However, it is worth noting that for both periods (around 500 and 365 days before) the number of anomalies (i.e., the jump in the cumulate) over the epicentral area is higher than over the EU comparison one, although this does not hold for the US East Coast. Moreover, the slope changes in the cumulative number of anomalies over California from −222 to −168 days (highlighted by two data tips in Figures 2a and 3a), which are not present over the two comparison areas, could be related to the preparatory phase of the California Ridgecrest earthquake. Furthermore, we checked whether the anticipation time is compatible with the Rikitake law estimated for Swarm magnetic field data by De Santis et al. [15]. This empirical law is a linear relationship between the decimal logarithm of the anticipation time (ΔT, expressed in days) and the earthquake magnitude (M), log(ΔT) = a + b·M, where a and b are the two coefficients of the linear fit. For the increase of anomalies around the California earthquake highlighted in Figure 2, the logarithm of its anticipation time is around 2.2-2.3. De Santis et al. [15] estimated, for an Mw = 7.1 earthquake, log10(ΔT) = 2.7 (±1.8). Therefore, the detected anticipation time for the California earthquake is statistically compatible with the value already estimated. It is important to note that even though the analysed satellites are the same, the time period of this earthquake was not included in the statistical analysis provided by De Santis et al. [15], which included data until August 2018, so we can consider the present result a further validation.
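The compatibility check can be reproduced in a few lines; the 168-222 day window and the reference value log10(ΔT) = 2.7 (±1.8) are taken from the text.

dT_window <- c(168, 222)     # anticipation window from this study, in days
obs <- log10(dT_window)      # about 2.23 to 2.35
expected <- 2.7; spread <- 1.8
all(abs(obs - expected) <= spread)   # TRUE: statistically compatible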
The "total windows" are the number of windows whose centres fall inside the investigated area during quiet geomagnetic times. Table 1 summarises these values for each satellite and for the constellation as a whole. We noticed that the constellation as a whole presented from about 27% to 82% more (normalised) anomalies in the epicentral area. Each single satellite generally shows more anomalies in the epicentral area with respect to the comparison ones, with the exception of Alpha, which presents 22% fewer anomalies in the epicentral area with respect to the US East Coast comparison one. In the other cases, the satellites present from 42% up to 220% more anomalies in the epicentral area. Charlie is the satellite with the highest percentage of anomalies in the epicentral area with respect to both comparison areas. All the differences between the California and EU areas, reported in Table 1, are statistically significant.
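A sketch of the normalisation behind Table 1 follows; the anomaly and window counts shown are illustrative placeholders, not the values in the table.

norm_excess <- function(n_epi, w_epi, n_cmp, w_cmp)
  100 * ((n_epi / w_epi) / (n_cmp / w_cmp) - 1)   # % excess, window-normalised
norm_excess(71, 1000, 39, 1000)   # ~82% more anomalies with equal window counts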
Considering all the satellites, the number of anomalies in the epicentral area is 15 more than in the US East Coast comparison area and 32 more than in the EU one, which corresponds to 27% and 82% more, respectively. Table 1. Number of anomalies, total windows and normalised percentage of anomalies detected in the epicentral area with respect to (w.r.t.) the comparison areas centred in the US East Coast and in Europe, using the Swarm satellites. The anomalies have been obtained by considering 1000 days before the California earthquake that occurred on 6 July 2019. We also compared the distribution of the anomalies with respect to their local time in all the areas. A difference in local time distribution can be considered to support the possible link with the seismic activity. In particular, we checked whether the anomalies are concentrated at a particular time of the day. To be sure that any possible conclusion will not be affected by the influence of the Swarm constellation orbital parameters on the local time, the latter has been checked by analysing how it is distributed over the analysed period, as shown in Figure 4a. All the analysed windows with quiet geomagnetic field conditions (as defined above) are reported as dots: the colours for the Alpha and Charlie satellites have been chosen to be the same for simplicity, considering that their local time differences are of a few minutes. The local time distribution of the analysed windows confirms that the epicentral and comparison areas are equally covered for the same period. Figure 4b-d report the local time histogram distributions of the detected anomalies for the California, US East Coast and EU regions. It is evident that in the epicentral area there are more anomalies in the "early morning" between 2 AM and 8 AM with respect to the comparison areas, where the anomalies are mainly distributed at midday (US East Coast), and at sunset and in the first hours after it for both comparison areas. The different local time distribution of the anomalies can thus be considered a sign of different phenomena producing these anomalies. For the principal three jumps (indicated by coloured arrows in Figures 2a and 3a) in the cumulative number of anomalies at around 500, 365 and 200 days before the earthquake in the epicentral and comparison areas, the local time distribution has been depicted by different colour bars in the histograms. For both comparison areas, no anomalies have been detected 200 days before the earthquake. We noted that at 14:00 local time in the epicentral area, some anomalies (4 out of 10) had been detected 500 days before the event, but for the US East Coast, 3 anomalies had been detected 2 h earlier, suggesting a regional non-seismic phenomenon. Discussion and Conclusions By an automatic analysis of the anomalous Swarm three-satellite tracks, it has been possible to detect an increase of anomalies around 200 days before the 6 July 2019 California mainshock. Such an increase of anomalies is considered as possibly related to the preparatory phase of the California Ridgecrest earthquake.
The result has been validated after comparison with two equivalent areas centred at the same geomagnetic latitude and with longitudes that correspond to the US East Coast (82.5° W) and to Europe-Spain (3.3° W). The comparison is essential to exclude possible global perturbations of the geomagnetic field. The detected anticipation times are well compatible with those expected from the Rikitake empirical law, recently confirmed for satellite data by the statistical studies conducted by De Santis et al. [15]. Other increments of anomalies have been detected at about 500 and 365 days before the earthquake, but increases of the cumulates have also been detected in the comparison regions at similar times. We noted that such increases at about 500 and 365 days in the US East Coast comparison area are even higher than above the epicentral area. This led us to conclude that these anomalies are a regional phenomenon in the US, likely not due to the preparation phase of the Ridgecrest earthquake. An open question is whether only one type of pre-earthquake Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) process exists or whether several phenomena could be involved during the earthquake preparation phase. The latter hypothesis seems more plausible, possibly explaining why some anomalies are closer in time to the event. In this paper, a magnetic anomaly in the Y component appears 15 min before the mainshock, while similar anomalies appeared 9 days before the 2016 M7.8 Ecuador earthquake and 3 days before the 2016 M6.0 Italy earthquake, as found by Akhoondzadeh et al. [10] and Marchetti et al. [12], respectively. For the same earthquake, De Santis et al. [19] found a chain of Lithosphere-Atmosphere-Ionosphere anomalies even increasing in number toward the event. This work is complementary in the sense that the anomalies depicted in this paper cover the whole preparation phase investigated in De Santis et al. [19], from anomalies with long anticipation times (Figures 2 and 3) to an early anomaly just before the earthquake (shown in Figure 1). It is worth noting that the coverage of the three Swarm satellites is not uniform in time, as the analysed region is revisited by each satellite about twice per day (once during night-time and once in the daytime). Therefore, a broader satellite constellation could hopefully give more chances to detect such phenomena and provide better time coverage worldwide, i.e., of all the active seismic zones. The higher number of anomalies in the epicentral area with respect to the comparison ones suggests that at least some of the anomalies in the epicentral area could be due to the earthquake preparation phase. This result is similar to that obtained in the previous investigations conducted by De Santis et al. [14,15] in the frame of the ESA-funded project SAFE (SwArm For Earthquake study). Moreover, the concentration of anomalies at a different local time with respect to those detected in the comparison areas can be considered a further hint supporting this hypothesis.
The presented analyses not only intend to better understand the physical processes behind the preparation phase of medium-large earthquakes in the world, but also demonstrate the usefulness of a large satellite constellation to monitor ionospheric geomagnetic activity and to investigate for how long a seismo-induced ionospheric disturbance can be detected in the active seismic region before the event, which can be even several months before the mainshock. Finally, this type of analysis, together with lithospheric, geochemical and atmospheric data investigations, could possibly bring, in the near future, the capability to make reliable earthquake forecasting.
6,349
2020-12-01T00:00:00.000
[ "Geology", "Physics" ]
Electroporation effect of ZnO nanoarrays under low voltage for water disinfection : It is quite necessary to develop a safe and efficient technique for disinfection of drinking water to avoid waterborne pathogens of infectious diseases. Herein, ZnO nanoarray electrodes with different sizes were investigated for low-voltage and high-efficiency electroporation disinfection. The results indicated that the ZnO nano-pyramid with small tip width and proper length exhibited over 99.9% disinfection efficiency against Escherichia coli under 1 V and a flow rate of 10 mL/min (contact time of 1.2 s). The suitable size of the nanoarray for electroporation disinfection was optimized by establishing the correlation between four kinds of ZnO nanoarrays and their efficiency of electroporation disinfection, which can guide the preparation of next-generation electroporation-disinfecting electrodes. Introduction Drinking water safety is closely related to human health [1]. Due to rapid economic development, water pollution has occurred in many parts of the world [2,3]. Waterborne pathogens can lead to various infectious diseases, such as diarrhea, typhoid, and cholera. Most deaths from waterborne pathogen infections occur in poor countries that still do not have access to sanitation and electricity [4,5]. To obtain safe drinking water, water disinfection treatment is required. Therefore, water disinfection treatment technology has always been a focus of attention [3,4,6]. The commonly used chlorination has low cost and is easy to apply, but there is a safety risk of producing carcinogenic by-products [7,8]. Ultraviolet disinfection and ozone disinfection have high energy consumption and limited disinfection capacity [9,10]. Moreover, traditional disinfection methods are not easily applicable in poor areas without sanitation and electricity supply [11]. To better protect human health, it is critical to develop efficient disinfection technology with low safety hazards, low energy consumption, and easy operation as a complement to modern disinfection technology. A simple and effective approach for inactivating different types of pathogens is electrodisinfection [12]. Electrodisinfection is of great interest because of its high efficiency in water treatment without generating potentially toxic products or bacterial resistance [13]. The effect of electrochemical disinfection is based mainly on the efficient generation of reactive oxygen species (ROS), which not only inactivate bacteria but also degrade organic matter. In contrast, the effect of electroporation disinfection is based mainly on bacterial lysis and perforation triggered by a strong electric field, resulting in bacterial deactivation. Electroporation disinfection is more rapid compared to electrochemical disinfection [14]. The electroporation phenomenon generally refers to the formation of nanoscale defects or pores in the cell membrane under a strong electric field, resulting in increased permeability of the cell membrane to ions and otherwise non-permeable molecules. Irreversible electroporation is triggered when the strength of the electric field is high enough (10^7 V/m), leading to cell death [15]. Traditional electroporation disinfection usually requires a high applied voltage (10 kV) to generate a strong electric field, which not only leads to high energy consumption and high cost but also causes safety problems [16].
Since 2010, researchers have developed a bactericidal technique that uses one-dimensional nanostructures to trigger electroporation for disinfection. Based on the locally enhanced electric field effect of the nanotips, irreversible electroporation of bacteria can be achieved at a low applied voltage (<20 V), allowing for safe and low-cost electroporation disinfection [17]. Electroporation disinfection with nanoarray-modified electrodes is considered a reliable disinfection technology due to its ability to maintain good sterilization performance at low energy and low cost [18,19]. Electroporation disinfection proceeds simultaneously with multiple disinfection mechanisms, including the production of ROS under electrical stimulation [14]. The contribution of ROS to disinfection depends on the experimental conditions and the nature of the electrode. It was reported that in electroporation disinfection performed at voltages above 10 V, more ROS were produced due to the higher voltages, so that the disinfection mechanisms included both electroporation and oxidative stress [20,21]. Under electrical stimulation, for electrodes with longer contact time with the bacteria, ROS became the main disinfection mechanism, based on electron transfer between the material and the bacteria [22]. To avoid the problems of unnecessary water decomposition and corrosion occurring at the nanoelectrodes, researchers have used voltages below the typical water-electrolysis voltage (<2 V) for electroporation disinfection, in which case electroporation was the main disinfection mechanism [23][24][25]. How to improve the disinfection efficiency is the key issue of electroporation-disinfecting technology [26,27]. Several kinds of nanoarray-modified porous electrodes of metal compounds were developed to achieve efficient disinfection [24,28]. An Ag nanoparticle-loaded CuO nanoarray-modified (Ag NPs-CuO NWs) copper foam electrode was reported to exhibit high-flow-rate disinfection at 10 V applied voltage [29]. A carbon layer-coated Ag NPs-Cu2O NW-modified (C/Cu2O NWs-Ag NPs) copper foam electrode was prepared by coating a carbon layer on Ag NPs-Cu2O NWs, which enhanced the conductivity of the electrode while protecting the nanostructure with the carbon layer [21]. Branch-structured CuO-Co3O4 NWs were constructed on copper foam, and a carbon film was coated on the NWs to obtain a CuO-Co3O4@C NW-modified copper foam electrode, which further improved the efficiency of electroporation disinfection [30]. However, the use of precious metals such as Ag brings more safety risks in the disinfection process; complex branch-structure construction and carbon layer loading processes increase the difficulty and economic cost of electrode preparation; and there are still problems such as hydrolysis during disinfection above 10 V [23,25,31]. These problems limit the practical application of electroporation disinfection technology. Therefore, it is important and necessary to find electroporation electrodes with a simple preparation process, low material cost, and excellent disinfection performance to achieve efficient electroporation disinfection at low voltage.
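A back-of-envelope sketch, ours rather than the paper's, of why nanotip field enhancement is essential at such low voltages; the 1 mm electrode gap is an assumed value for illustration only.

V_applied   <- 1       # V, applied bias
gap         <- 1e-3    # m, assumed (hypothetical) electrode spacing
E_threshold <- 1e7     # V/m, irreversible-electroporation threshold [15]
E_nominal <- V_applied / gap        # only 1e3 V/m without nanostructures
E_threshold / E_nominal             # ~1e4-fold tip enhancement required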
As is well known, the efficiency of electroporation disinfection is closely related to the enhanced electric field generated at the nanotips under applied voltage, which may be determined by the size and morphology of the nanostructure. Therefore, morphology modulation of nanoarrays, as a simple and effective means, should be explored and applied to the optimization of electroporation electrodes. Up to now, there are few reports on the effect of nanoarray size on the efficiency of electroporation disinfection. In this article, four types of ZnO nanoarrays (ZnO NRs) with different tip widths and lengths were prepared on copper foam electrodes. Efficient electroporation disinfection with ZnO NRs was realized through morphology modulation and size regulation, and the electroporation disinfection effect under low voltage was evaluated. The correlation between the morphology of the nanoarray and the efficiency of electroporation disinfection has been established, which can pave the way for electroporation disinfection technology. Fabrication of ZnO nanoarray-modified electrodes ZnO NRs were prepared by a two-step hydrothermal method, and the fabrication process of the ZnO seed layer followed the sol-gel method [32] (Figure 1a). The ZnO seed layer was pre-deposited on copper foam by dip-coating from a zinc acetate ethanol colloidal solution (5 mM) and then annealing at 400°C for 40 min. To obtain a sufficient density of the seed layer, the coating step was repeated four times. Then, chemical bath deposition was used to grow the ZnO NRs [33]. A piece of copper foam substrate with the ZnO seed layer was immersed into a hydrothermal growth solution containing 50 mM of Zn(NO3)2, 50 mM of HMTA, and PEI (1, 4, or 7 mM) at 95°C for 4 h, yielding the ZnO nano-prism, ZnO nano-prismoid, and ZnO nano-pyramid nanoarrays. ZnO nano-needle nanoarrays were grown by pre-oxidation (400°C, 20 min) of the copper foam substrate before loading the ZnO seed layer, with 7 mM PEI added to the growth solution. After growth, all samples were washed with deionized water. Materials characterization The morphology of the ZnO NR-modified electrodes was characterized by a scanning electron microscope (SEM, COXEM EM-30). The crystal structure of the samples was examined by X-ray diffraction (XRD, Empyrean) with CuKα radiation in the 2θ range of 5-85°. Operation for water disinfection Two model bacterial suspensions (E. coli [ATCC 25922] and S. aureus [ATCC 6538]) were selected and diluted with 0.9% NaCl solution to 10^6 colony-forming units (CFU)/mL. Before testing, two prepared ZnO NR-modified electrodes were assembled in parallel in an electroporation-disinfecting device; the thickness of each electrode is 1 mm. The electroporation-disinfecting device is shown in Figure S1. Considering that the working surface for electroporation is about 1 cm^2, flow rates were kept in the range of 5-20 mL/min, corresponding to contact times of 2.4-0.6 s.
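The flow-rate/contact-time pairing quoted above implies, via t = V/Q, an effective treatment volume of about 0.2 mL; this inference is ours and is not stated in the paper.

Q <- c(5, 10, 15, 20) / 60   # flow rates converted from mL/min to mL/s
V_eff <- 0.2                 # mL, inferred effective treatment volume
round(V_eff / Q, 2)          # 2.40 1.20 0.80 0.60 s, matching the text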
The disinfection performance was investigated under different flow rates (corresponding to different contact times) and voltages. The flow rate was controlled by a peristaltic pump, and the voltage was controlled by a DC power supply. First, the electroporation-disinfecting device was connected to the water pipe controlled by the peristaltic pump, and the DC power supply was connected to the positive and negative copper wires. During testing, the electrodes were operated in the absence of an applied electric field and in the presence of 1 and 2 V external voltages (bacterial concentration of about 10^6 CFU/mL and flow rate of 10 mL/min, corresponding to a contact time of 1.2 s). The contact time between bacteria and electrode ranged from 0.6 to 2.4 s, controlled by the flow rate in the electroporation-disinfecting device (bacterial concentration of about 10^6 CFU/mL and external voltage of 1 V). Colony counts were used to calculate the bacterial removal efficiency. To ensure reproducibility, tests were conducted in triplicate for each group. The inactivation rate was calculated according to the following equation: E = (Ci − Ce)/Ci × 100%, where E represents the inactivation rate, Ci is the microorganism concentration in the influent, and Ce is the microorganism concentration in the effluent (CFU/mL). Results and discussion Characterization of ZnO NR-modified electrodes ZnO NRs on copper foam were fabricated by a two-step hydrothermal growth method. Among the four kinds of ZnO NRs on copper foam, ZnO nano-prisms, ZnO nano-prismoids, and ZnO nano-pyramids with similar density were obtained by introducing PEI to regulate the tip size of the ZnO NRs. In addition, ZnO nano-needles with higher density were obtained by pre-oxidizing the copper foam substrate before growth. SEM images of the four different tip widths of ZnO NRs are shown in Figure 2a-d. The PEI molecules tend to adsorb on the ZnO (100) crystal surface, increasing the relative c-axis growth rate and resulting in the formation of NRs with uniform tip width [34]. Therefore, the tip width of the ZnO NRs gradually decreased with increasing PEI concentration. In addition, the surface roughness of the copper foam increased after pre-oxidation, leading to an increase in the surface roughness of the subsequently prepared ZnO seed layer (Figure S2). The increased surface roughness of the ZnO seed layer led to an increase in nucleation sites at the initial stage of ZnO NR growth, resulting in smaller tip width and length of the nanoarrays [35]. The average tip width and length of the nanoarrays in a given area were measured using particle size statistics software, and the results are shown in Figure 3. The average tip width and length of the ZnO nano-prism were 465 nm and 4.9 µm, respectively; of the ZnO nano-prismoid, 265 nm and 4.8 µm; of the ZnO nano-pyramid, 75 nm and 4.9 µm; and of the ZnO nano-needle, 35 nm and 1.3 µm. The calculated aspect ratios of the nanoarrays were in the order ZnO nano-pyramid > ZnO nano-needle > ZnO nano-prismoid > ZnO nano-prism. Performance for electroporation disinfection of ZnO nanoarrays Applied voltage and flow rate were important parameters for evaluating the efficiency of electroporation disinfection.
The disinfection efficiency of the four kinds of ZnO NR electrodes under different voltages and flow rates against the two model bacterial suspensions was studied. Figure 5 shows the disinfection results of the four ZnO NRs under different voltages. It can be seen that the disinfecting effect of all four samples was unsatisfactory without applied voltage; the residual sterilization comes from the physical bactericidal effect of the ZnO NRs [33]. Under applied voltage, the disinfection efficiency of all four kinds of ZnO NRs increased. The ZnO nano-pyramid had the best disinfection effect, achieving over 99.9% disinfection efficiency against E. coli under an applied voltage of 1 V. Moreover, the disinfection efficiency of all four kinds of ZnO NRs under 2 V was significantly higher than that under 1 V, indicating that the voltage has a large effect on the efficiency of electroporation disinfection. Compared with the intact and smooth membrane of untreated E. coli, treated E. coli showed significant damage on the surface (Figure S3). A voltage of 1 V was selected for safety and low energy consumption, corresponding to less than 10.5 × 10^−3 J/s (Table S1). Figure 6 shows the disinfection results of the four kinds of ZnO NR electrodes under different contact times. As the treatment flow rate increased, the contact time between bacteria and electrodes decreased, and the disinfection results show a gradual increase in the number of colonies in the counted Petri dishes (Figure 6a). This is because at low flow rates the bacteria were in contact with the ZnO electrode for a longer time and had a higher chance of being electroporated. We noticed that both the ZnO nano-pyramid and ZnO nano-prismoid electrodes could achieve over 99% disinfection efficiency at a contact time of 2.4 s, illustrating the importance of sufficient contact time between bacteria and electrodes for electroporation disinfection. Meanwhile, the ZnO nano-pyramid had the best disinfection efficiency, achieving 99.9% disinfection efficiency at contact times of 1.2-2.4 s (flow rates of 5-10 mL/min) and maintaining over 90% disinfection efficiency at contact times of 0.6-0.8 s (flow rates of 15-20 mL/min). Figure 7 shows the disinfection results of all four kinds of ZnO NR electrodes against E. coli and S. aureus. It is obvious from the colony count results that the disinfection efficiency of the ZnO NR electrodes against E. coli was better than that against S. aureus. Again, the ZnO nano-pyramid exhibited the best electroporation disinfection efficiency for both bacteria, achieving 99.9% for E. coli and 84.4% for S. aureus under a flow rate of 10 mL/min (contact time of 1.2 s) and an applied voltage of 1 V, indicating the superiority of this morphological nanoarray in electroporation disinfection. We also found that, despite a small amount of detachment and inclination of the nanorods, the surface of the ZnO nano-pyramid electrode still maintained a good morphology after use (Figure S4).
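A small R sketch implementing the inactivation-rate equation given above and the aspect-ratio ordering computed from the measured tip widths and lengths reported earlier:

inactivation <- function(Ci, Ce) (Ci - Ce) / Ci * 100   # E in percent
inactivation(Ci = 1e6, Ce = 1e3)                        # 99.9% removal

dims <- data.frame(
  morphology = c("nano-prism", "nano-prismoid", "nano-pyramid", "nano-needle"),
  tip_nm     = c(465, 265, 75, 35),
  length_um  = c(4.9, 4.8, 4.9, 1.3))
dims$aspect <- dims$length_um * 1000 / dims$tip_nm
dims[order(-dims$aspect), ]   # pyramid > needle > prismoid > prism, as stated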
It is known that bacterial inactivation during electroporation with nanoelectrodes may be caused by both the strong electric field and oxidative stress [20]. To explore the potential role of oxidative stress in bacterial inactivation, the bacteria were incubated with 10 mM glutathione (reduced form, GSH) in the influent suspension to remove the contribution of oxidative stress to bacterial deactivation. The results show that, in this work, the disinfection efficiency is mainly attributed to the electroporation bactericidal effect caused by the strong electric field, while the oxidative stress caused by ROS is negligible (Figure S5). By comparing the disinfection performance of the four kinds of ZnO NR electrodes, we found that the disinfection efficiency was in the order ZnO nano-pyramid > ZnO nano-prismoid > ZnO nano-needle > ZnO nano-prism. Among them, the ZnO nano-pyramid, with small tip width and the largest nanoarray length, had the highest disinfection efficiency, achieving a disinfection efficiency of more than 99.9% for E. coli concentrations of 10^3-10^6 CFU/mL (Figure S6). The disinfection efficiency of the ZnO nano-needle, which has the smallest tip width, was unsatisfactory. This may be due to the fact that, although the tip width of the ZnO nano-needle is minimal, its array is short and dense, and such a morphology is not conducive to electric field enhancement at the nanotips. These results indicate that, for electroporation disinfection, the nanoarray should be designed not only with a small tip width but also with a suitable length. In addition, the comparison of the ZnO nano-pyramid electrode with other previously reported electrodes for water disinfection is shown in Table S1, further confirming the superior performance of the ZnO nano-pyramid electrode (Table S2). Conclusions In summary, the most suitable size of ZnO nanoarrays for electroporation disinfection was identified by investigating the electroporation disinfection performance of four kinds of ZnO NRs with different tip widths and lengths, and a simple and efficient ZnO electroporation electrode was prepared. The results indicated that the ZnO nano-pyramid with small tip width and proper length exhibited over 99.9% disinfection efficiency against E. coli under a low voltage of 1 V and a flow rate of 10 mL/min (contact time of 1.2 s). Here, the electroporation disinfection performance derived mainly from the locally enhanced electric field at the nanotips, rather than from ROS. Lastly, we concluded that a smaller nanoarray tip width is beneficial for electroporation disinfection within a certain range, but the length of the nanoarray is also important. A tip width below 100 nm and a length above 4 μm should be a more suitable nanoarray size for electroporation disinfection. A study of the effect of nanoarray size on the efficiency of electroporation disinfection is important for the development of electroporation-disinfecting technology. Figure 1: Electrode fabrication and construction of the electroporation disinfection system. (a) Schematic illustration of the synthesis of ZnO NRs on copper foam electrodes. (b) Schematics showing the configuration of the electroporation disinfection process. Figure 2: SEM images of ZnO NRs hydrothermally grown on copper foam with PEI solution at (a) 1, (b) 4, and (c) 7 mM and (d) after pre-oxidation of the copper foam, respectively.
Figure 5: Disinfection efficiency of ZnO nano-pyramid, ZnO nano-needle, ZnO nano-prismoid, and ZnO nano-prism electrodes at applied voltages of 0–2 V and a flow rate of 10 mL/min (corresponding to a contact time of 1.2 s), using a 10⁶ CFU/mL E. coli solution in a saline environment. (a) Optical images and (b) electroporation disinfection efficiency of the four ZnO NRs under 0–2 V applied voltages.

Figure 4: XRD patterns of ZnO NRs with different morphologies.

Figure 3: Topographical parameters of the nanoarrays: tip width and length, determined from SEM images (using Nano Measurer).

Figure 6: Disinfection efficiency of ZnO nano-pyramid, ZnO nano-needle, ZnO nano-prismoid, and ZnO nano-prism electrodes at an applied voltage of 1 V and contact times of 0.6–2.4 s (corresponding to flow rates of 5–20 mL/min), using a 10⁶ CFU/mL E. coli solution in a saline environment. (a) Optical images and (b) electroporation disinfection efficiency of the four ZnO NRs under 0.6–2.4 s contact times.

Figure 7: Disinfection efficiency of ZnO nano-pyramid, ZnO nano-needle, ZnO nano-prismoid, and ZnO nano-prism electrodes at an applied voltage of 1 V and a flow rate of 10 mL/min (corresponding to a contact time of 1.2 s), against 10⁶ CFU/mL E. coli and S. aureus solutions in a saline environment. (a) Optical images and (b) electroporation disinfection efficiency of the four ZnO NRs against E. coli and S. aureus under 1 V and a contact time of 1.2 s.

ZnO nano-prism, ZnO nano-prismoid, and ZnO nano-pyramid nanoarrays were prepared in a growth solution containing 50 mM HMTA and PEI (1, 4, or 7 mM) at 95 °C for 4 h. ZnO nano-needle nanoarrays were grown by pre-oxidizing the copper foam substrate (400 °C, 20 min) before loading the ZnO seed layer, with 7 mM PEI added to the growth solution. After growth, all samples were washed with deionized water.
4,365.2
2023-01-01T00:00:00.000
[ "Environmental Science", "Materials Science" ]
STUDY OF A 7-STORY BUILDING DEVELOPMENT PLANNING SPECIFICATION FOR A STEEL FRAMEWORK STRUCTURE (BASED ON SNI 1729:2015) (CASE STUDY: OFFICE BUILDING CONSTRUCTION PROJECT)

Steel is a material that is widely used in industrial and building construction, functioning as the main building frame. This study plans a building structure with 7 floors plus a ground floor that will function as an office building, analyzed using a Structure Analysis Program (SAP 2000 v.19) combined with the steel regulations of SNI 1729:2015. The SAP 2000 v.19 analysis produces a column design using steel WF 400x400x30x50, with beam 1 using steel WF 350x300x14x23 and beam 2 using steel WF 200x200x9x14. The anchors and baseplate use M-25 anchors with a length of 400 mm and a baseplate thickness of 25 mm, and the connections use 16 M-25 bolts.

INTRODUCTION

Steel is a structural material that is often used in multi-story buildings. Structural planning can be defined as a mixture of art and science, combining an expert's sense of structural behavior with a sound knowledge of statics, dynamics, mechanics of materials and structural analysis. This design uses the SNI 1729:2015 regulations, which govern current steel structure calculations, combined with SAP 2000 v.19 modeling analysis. The plan analyzes the capacity of the column and beam structures as well as the connections and anchors, referring to the applicable SNI regulations. This study aims to analyze the construction of an office building using a steel frame structure per SNI 1729:2015, to determine the strength of the column and beam structures, and to determine the required column and beam sections. The location of this study is at Bukit Golf Citraland, Surabaya City.

Review of Previous Research

A steel structure is chosen based on economic considerations and its strength, being suitable for load-bearing members (PADOSBAJOYO, 1994). Steel structures are widely used for multi-story columns and beams, roof support systems, hangars, bridges, antenna towers, and other applications. From the introduction of steel as a building material until 1960, the steel used was carbon steel per the ASTM (American Society for Testing and Materials). Currently, many steel profiles are available that allow planners to increase the material strength in highly stressed areas, so there is no need to increase the member dimensions. Planners can design on the basis of maximum rigidity or the lightest weight. Structural planning according to the Specification for Structural Steel Buildings (SNI 1729:2015) aims to produce a structure that is stable, strong and durable, and that fulfills other objectives such as economy and ease of implementation. A building is said to be stable if it does not easily overturn, tilt or displace during its design life. The risk of structural failure and loss of serviceability over the design life must be minimized within acceptable limits. The journal by Budiman and Heri Khoeri, titled "Comparative Study of Steel Structures using WF Profiles versus HSS Profiles in Structural Columns", concludes that building construction is closely related to the structure that supports it.
The results of that study show that columns using the HSS (Hollow Structural Section) profile have a greater deflection, a greater stress ratio and a lighter steel construction weight compared to the WF (Wide Flange) profile.

Basic Theory Used

Loading and load combinations for structural planning in building construction must take into account dead loads, live loads, earthquake loads, wind loads and rain loads, and a structure must meet the design strength using the combinations of SNI 1727:2013 as follows: 1) 1.4D; 2) 1.2D + 1.6L + 0.5(La or H); 3) 1.2D + 1.6(La or H) + (γL·L or 0.8W); 4) 1.2D + 1.3W + γL·L + 0.5(La or H); 5) 1.2D ± 1.0E + γL·L; 6) 0.9D + (1.3W or 1.0E). According to the Earthquake Resilience Planning Procedure for Building and Non-Building Structures (SNI 1726:2012), the design earthquake effects must be reviewed in the planning and evaluation of building and non-building structures, as well as their various parts and equipment. The site class is determined according to the properties of the soil at the site, classified as site class SA, SB, SC, SE or SF based on the results of the soil investigation data.

RESULTS AND DISCUSSION

In planning the 7-story office building plus 1 ground floor, the structure is planned and designed using the SAP 2000 v.19 program, after which an analysis of the working structure is carried out.

Preliminary Design

Based on preliminary data and initial planning analysis, the steel sections to be used are as follows: Column WF 400 x 300 x 10 x 16.

Spectrum Response

The calculation of the seismic load and the response spectrum analysis for this office building project uses a spreadsheet combined with the spectra design application on the website www.puskim.pu.go.id. These data classify the soil as a rock site (SB). In determining the ground response spectrum, seismic periods of 0.2 s and 1 s are used, which yield: Sms = 0.662, Sm1 = 0.248, Sds = 0.44, Sd1 = 0.164. From these data, the response spectrum graph is obtained.

SAP Analysis and Structure

From the structural analysis using the Structure Analysis Program (SAP 2000 v.19), the profiles and connection calculations are obtained as in the tables below.

Anchors, Baseplate and Connections

The analysis of the anchors and baseplate to be used in the office building design is as follows.

CONCLUSION

Based on the data analysis and discussion of the planning of the 7-story office building plus 1 ground floor, it can be concluded that: 1. The profiles used in the preliminary design, when entered into the SAP analysis, produce 407 critical frames whose steel profiles must be changed. 2. The revised profiles to be used in the planning of this office building are: main beams using steel WF 350x300x14x23, secondary beams using steel WF 200x200x9x14, and columns using steel WF 400x400x30x50. 3. From the SAP analysis, checks and controls on the main beams, secondary beams and columns for axial, shear and moment calculations produce structures that meet the requirements and are safe for use in the office building structure. 4. For the anchors and baseplate, the dimensions obtained are M-25 anchors with a 400 mm anchor length and a baseplate with a planned thickness of 25 mm. 5.
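As a cross-check of the reported spectral values, the sketch below applies the standard SNI 1726:2012 reduction (the same 2/3 factor used in ASCE 7) to Sms and Sm1 and builds the piecewise design spectrum. The spectrum shape is the code's standard form, not output taken from the puskim.pu.go.id application.

```python
# Sketch of the design spectral parameters used above, following the
# standard SNI 1726:2012 reduction. Sms and Sm1 are the site-adjusted
# spectral accelerations reported in the text.

Sms, Sm1 = 0.662, 0.248          # from the spectra design application
Sds = (2.0 / 3.0) * Sms          # -> 0.441, matching the reported 0.44
Sd1 = (2.0 / 3.0) * Sm1          # -> 0.165 ~ 0.164 after rounding

T0 = 0.2 * Sd1 / Sds             # corner periods of the design spectrum
Ts = Sd1 / Sds

def design_spectrum(T: float) -> float:
    """Design spectral acceleration Sa(T) per SNI 1726:2012."""
    if T < T0:
        return Sds * (0.4 + 0.6 * T / T0)
    if T <= Ts:
        return Sds
    return Sd1 / T

print(f"Sds={Sds:.3f}, Sd1={Sd1:.3f}, T0={T0:.3f}s, Ts={Ts:.3f}s")
```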
The connection structure of this building uses bolted connections; the analysis results in M-25 bolts with a total of 16 bolts.

Suggestions

Suggestions that the writer can convey based on the analysis of this steel frame building are: 1. Check and control again the building calculations and member placement for these structures, both columns and beams; 2. Analyze the structural stiffness of the building; 3. Analyze the calculations for the pile, sloof and pile-cap foundations in the planning of the office building.
1,745.8
2020-10-01T00:00:00.000
[ "Engineering", "Materials Science" ]
Arsenic and Heavy Metals in Sediments Affected by Typical Gold Mining Areas in Southwest China: Accumulation, Sources and Ecological Risks

Gold mining is associated with serious heavy metal pollution problems. However, studies on such pollution caused by gold mining in specific geological environments and extraction processes remain insufficient. This study investigated the accumulation, fractions, sources and influencing factors of arsenic and heavy metals in sediments from a gold mine area in Southwest China and also assessed their pollution and ecological risks. During gold mining, As, Sb, Zn and Cd in the sediments were affected, and their accumulation and chemical activity were relatively high. Gold mining is the main source of As, Sb, Zn and Cd accumulation in sediments (over 40.6%). Several influencing factors cannot be ignored, namely water transport, the local lithology, the proportion of the mild acid-soluble fraction (F1) and the pH value. In addition, arsenic and most of the tested heavy metals pose pollution and ecological risks of varying degree, especially As and Sb. Compared with other gold mining areas, the sediments in this study area carry higher arsenic and heavy metal pollution and ecological risks. The results show that the local government must monitor potential environmental hazards from As and Sb pollution to prevent adverse effects on humans. This study also provides suggestions on water protection in gold mining areas of the same type.

Introduction

Non-ferrous metal mining causes severe contamination by As and heavy metals (HMs). Owing to their increasing activity and mobility in water [1,2], sediments are one of the major destinations for As and HMs migrating and diffusing in the water [2,3]. The As and HMs accumulated in sediments are often tens or even hundreds of times the amounts in water [3][4][5]. The relatively stable and conducive sedimentary environment transforms them into more harmful pollutants or chemical fractions [6]. Once the sedimentary environment is changed by external interference, the As and HMs accumulated in sediments will be resuspended, causing secondary pollution of the water environment [6,7], especially the highly mobile As and HMs that have been affected by the mining area. Therefore, it is necessary to focus on the accumulation and contamination of As and HMs in sediments around non-ferrous metal mining areas. Notably, many factors affect the migration, diffusion, accumulation and release of As and HMs from water to sediment [8][9][10][11]. The heavy metal pollution caused by gold ore also deserves attention because of its low grade and the large amounts of waste rock and tailings [12]. Moreover, with accelerating industrialization and urbanization, extensive mining activities are one of the main causes of China's serious environmental problems [13,14]. At present, heavy metal mining pollution and ecological risks are the focus of China's water and soil protection, especially in the southwest [15][16][17], where special action urgently needs to be taken.

Study Area

The study area is in the northeast of Southwest Guizhou Province [22]; it has a subtropical monsoon climate, humid and rainy, with flat terrain at an altitude of 1400–1726 m. The annual average temperature is 15.2 °C, and the average precipitation is 1320.5 mm.
This area is a typical karst hilly landform, mainly composed of limestone (consisting mainly of calcium carbonate) mixed with a small amount of sandstone and mudstone (in the middle of the study area) and clay rock (in the northeast of the study area). The surface water and groundwater in this area are intertwined, and there are four relatively independent karst rivers, Xiaozichong (SZ), Huashiban (HS), Taiping Cave (TC) and Shamba (SB), flowing from southwest to northeast (Figure 1). The gold-bearing minerals are usually pyrite (FeS2) and arsenopyrite (AsFeS) [32]. According to the survey, the gold extraction process in this area is mainly carbon-in-leach (CIL). In addition to gold mining, cultivation and animal husbandry are the other major activities.

Sample Collection and Analysis

In March 2021, 25 sediment sample points were selected from four karst rivers (SZ, HS, TC and SB), an abandoned mine pit (M1) and a mine drainage collection pond (M2) in the study area. All material collected at a single sample point was uniformly mixed in a polyethylene sealed bag to obtain the final sample for that point; samples were then sealed, stored at 4 °C and sent for laboratory analysis. Before testing, the samples were dried in a freeze-dryer (10N-50A, Jingfei Technology, Shenzhen, China), then screened, and impurities were removed. A 1:5 sediment:water mixture (m (g)/V (mL)) was shaken and allowed to stand for 30 min before the pH of the sample was measured (PHS-3C, Rex Instruments, Hangzhou, China). An HNO3–HF digestion system was selected for hermetic digestion at 180 °C until the As and HMs in the sample were completely released, after which their concentrations were measured. The BCR sequential extraction method (GB/T 25282-2010) was used to extract four chemical forms of As and HMs. An inductively coupled plasma mass spectrometer (ICP-MS, iris intrepid II XSP, Agilent) was used for all measurement steps, and all chemical reagents used were of excellent purity grade.

Figure 1: Location, lithology and sampling points of the study area. Except for "Zimudang, Beijing, Guizhou Province, Southwest Guizhou Autonomous Prefecture", the other place names in the figure represent villages.

Quality Control and Statistical Analysis

In this study, the national standard sediment sample (GBW07382 (GSD-31)) and 20% parallel samples were used to ensure the precision and accuracy of the analytical procedure. The results showed that the recoveries of the samples were 94.10–113.83%, and the repeatability of the parallel samples was 91.10–109.49%. The data were statistically analysed and plotted using Origin 2021 and ArcGIS 10.6 software, and Pearson correlation analysis and source identification were performed using SPSS 20.0 and EPA PMF 5.0.

Background Values of As and HMs, Contamination Assessment Indices and Source Analysis Model

Considering that a regional background value is more appropriate than average crust or average shale data, the local HM soil background values of Guizhou Province were adopted as the background values (BV) [33] for the study area. A positive matrix factorization model (PMF) [34] was used to identify the sources of As and HM accumulation. The pollution and ecological risk of arsenic and HMs were assessed using the geo-accumulation index (Igeo) [35], the single ecological risk factor (Eir) and the potential ecological risk index (RI) [36].
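For reference, the three assessment indices named above can be computed as in the sketch below. The formulas are the standard Müller (Igeo) and Hakanson (Eir, RI) definitions; the toxic-response factors are the commonly used Hakanson values, the factor for Sb (which varies between studies) is an assumption here, and the example concentration and background value are hypothetical.

```python
import math

# Minimal sketch of the three assessment indices. TR holds toxic-response
# factors: the listed values are the commonly used Hakanson factors, and
# the Sb value is assumed for illustration only.

TR = {"As": 10, "Cd": 30, "Cr": 2, "Cu": 5, "Pb": 5, "Zn": 1,
      "Ni": 5, "Co": 5, "Sb": 7}

def igeo(conc: float, background: float) -> float:
    """Geo-accumulation index; the 1.5 corrects for background variation."""
    return math.log2(conc / (1.5 * background))

def eir(element: str, conc: float, background: float) -> float:
    """Single-element potential ecological risk factor."""
    return TR[element] * conc / background

def ri(concs: dict, backgrounds: dict) -> float:
    """Potential ecological risk index: sum of Eir over all elements."""
    return sum(eir(el, c, backgrounds[el]) for el, c in concs.items())

# Hypothetical sample: As at 400 mg/kg against an assumed 20 mg/kg background
print(igeo(400, 20))        # ~3.7 -> Grade IV (relatively high to high)
print(eir("As", 400, 20))   # 200  -> high risk (160-320 band)
```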
Concentration and Accumulation Changes of As and HMs in Sediments

Tables S1 and S2 (Supplementary Materials) show the concentrations and statistical results of As and HMs in the sediment samples. In general, except for Pb, the average concentrations of As and the other HMs were higher than, or even much higher than, their BV. By contrast, the average concentrations of As and HMs in the sediments of the SB karst water system, farthest from the mining area, were the lowest, indicating that their input was affected by external activities, among which the mining activities in the mine area cannot be ignored. Figure 2 shows the cumulative changes of As and HMs in the sediment samples along the flow direction. The accumulation changes of (1) As and Sb, (2) Zn and Cd, and (3) Cr, Co, Ni and Cu in the sediments were similar within each group, indicating that these three groups may each follow similar accumulation rules and share an accumulation source. The accumulation of As, Sb, Zn and Cd in the mining area (M) and its surrounding sediments (SZ2, TC2–TC4) was very pronounced; in particular, the cumulative concentrations of As and Sb in the M1 and M2 sediments were 93.54 and 6.60 times BV and 408.51 and 31.58 times BV, respectively. Simultaneously, the accumulation of As and Sb in the sediments downstream of the confluence, that is, at the intersection of the mining wastewater treatment plant outlet and the TC karst river (TC4), was also very high, reaching the peak of the TC karst river. This indicates that gold mining seriously impacts the accumulation of As, Sb, Zn and Cd in the sediments, especially As and Sb [37][38][39], similar to studies of other gold deposits [40][41][42][43]. In addition, the As and HM background of the gold mining area may be another contributing factor [44]. Overall, two phenomena emerge. First, the cumulative concentrations of As, Sb, Zn and Cd in the sediments of the four karst rivers gradually increase along the flow direction. Second, the cumulative As and HM concentrations in the sediments at the confluences of all main streams and tributaries (TC2 and SB2) also increased sharply, indicating that they were affected by water transport, because aqueous transport is one of the main pathways of migration and diffusion of As and HMs in the dissolved phase [45]. In addition, the accumulation of As, Sb, Zn and Cd in Figure 2 presents a non-unimodal pattern with several prominent concentration turning points, such as TC5, TC6, SB4 and SB5, indicating that their accumulation may also be affected by other factors. Compared with As, Sb, Zn and Cd, only part of the Cr, Co, Ni and Cu in the sediments decreased slightly around the mining area (SZ2 and TC4); their accumulation characteristics related to the mining area were not evident, which needs further analysis.
Chemical Forms of As and HMs in Sediments

Figure 3 shows the percentages of the four chemical forms of As and HMs in the sediments. A significant difference in the percentages between As and the individual HMs was observed. For the non-residual fractions: (1) the percentage of the F1 fraction of As, Cd and Co was relatively high, approximately 0–1/3, followed by Sb, Zn and Ni at approximately 0–1/6. However, owing to the strong correlation of Sb and Zn with aluminum, manganese and iron particles, their mobility was generally lower than that of the other HMs, consistent with this study [46]. By contrast, Cr, Pb and Cu had almost no F1 fraction. Specifically, the F1 percentages of As, Sb, Zn and Cd in the sediments were significantly higher than those of Cr, Co, Ni and Cu, so they have higher mobility and can more easily be taken up by aquatic organisms. This is attributable to gold mining, especially since the gold-carrying minerals and associated minerals of this type of gold deposit are rich in As and Sb [37][38][39], consistent with our previous analysis. (2) Almost all of the F2 and F3 fractions of As and HMs account for a certain proportion. Both fractions are vulnerable to redox fluctuations and are unstable, indicating weak retention in the sediment [47]; they are likely to be highly sensitive to influencing factors that can control or adjust redox conditions. For the residual fraction: (3) the proportion of the F4 fraction in the sediment samples was the highest among all fractions, with Cu and Sb being the most prominent (the F4 fraction exceeded 75% in 98.2% of the sediments), followed by Ni and As, indicating that the geological background was a non-negligible source and that their solubility is reduced to a certain extent [47].
Cumulative Sources of As and HMs in Sediments

Table S3 (Supplementary Materials) shows the Pearson correlation coefficients of As and HMs in the sediments. There is a significant correlation between these elements (ρ < 0.01), consistent with the analysed results. In the PMF model, the number of estimated factors was set to 2, 3, 4 and 5, and the predicted and measured values were fitted in each case. The fitting results show that the best solution has 3 factors, because its Q robust (326.4) is closest to Q true (419.9). Figure 4 shows the source contributions and contribution rates of As and HMs in the sediments and the percentage of each source for the 3-factor solution.

Figure 4: Contribution value, contribution rate and source percentage of As and HM concentrations in sediments obtained by the PMF model.
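As a rough illustration of the factorization step, the sketch below runs a plain non-negative matrix factorization on placeholder data. EPA PMF 5.0, which the study actually used, additionally weights residuals by measurement uncertainty, so this is an analogue of the method rather than a reproduction of it; the data matrix and all numbers are hypothetical.

```python
import numpy as np
from sklearn.decomposition import NMF

# Illustrative analogue of the source-apportionment step: X ~ W @ H with
# non-negativity, the core idea shared with PMF (which adds uncertainty
# weighting). The random matrix stands in for the real 25-sample,
# 9-element concentration table.

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(25, 9))   # samples x elements (placeholder)

model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)   # sample-wise source contributions
H = model.components_        # element profile of each source/factor

# Percent contribution of each factor to each element, as in Figure 4
contrib = 100 * H / H.sum(axis=0)
print(np.round(contrib, 1))
```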
In factor 3, As, Sb, Zn and Cd had the highest contribution rates (all greater than 40.6%). The analysis shows that gold mining has a significant impact on these elements; factor 3 is therefore regarded as the source related to gold mining, and this source accounts for nearly half of the contribution in the two groups.

In factor 2, As and all HMs have a certain proportion (26.4–56.6%). Given the percentages of their chemical forms, the geological background is therefore a source of As and all HMs in sediments that cannot be ignored. Southwestern China is also considered to have a naturally high background of metalloids and HMs [47][48][49]. Factor 2 is therefore regarded as a source related to the geological background.

In factor 1, the contribution rates of Cr, Co, Ni, Cu and Pb were higher (all greater than 34.8%), while those of As, Sb, Zn and Cd were lower (all less than 26.3%). First, we did not observe accumulation characteristics of Cr, Co, Ni, Cu and Pb related to the mining area. Second, compared with As, Sb, Zn and Cd, their chemical activity was lower. Considering that the local villages are evenly distributed (i.e., the area of human activities, Figure 1), these observations indicate that these elements may be more affected by the continuous input of other man-made sources, such as domestic sewage, than by gold mining. In addition, according to the survey, local cultivation and animal husbandry are well developed, and some studies show that organic and compound fertilizers contain specific concentrations of Cr, Co, Ni and Cu, especially Cu, which can stimulate plant growth and improve crop yield [50][51][52][53]; thus, agricultural activities may also contribute. In conclusion, factor 1 is another source based mainly on other human activities. (Because these are not closely related to gold mining, they are not explained in detail in the following analysis.)

Influencing Factors of As and HMs Accumulation in Sediments

In addition to the two factors of gold mining and water transport, we further discuss pH and the F1 fraction. Considering the geological characteristics of karst areas, we also discuss the lithology of the area. Table S1 and Figure 1 show the lithology of the area corresponding to each sediment sample. Interestingly, the cumulative changes of As and HMs closely track the lithological changes of the sediment areas. For example, the sediments of several karst rivers (SZ2, TC2–TC4, TC7–TC8, SB4, SB6) located in clay rock and sandy mudstone areas also show the characteristics of "piecewise" accumulation. Previous studies have shown that colloidal transport is, besides aqueous transport, the main mode of migration and diffusion of As and HMs [11,54,55]. This transport mode is sensitive to the redox state and changes the stability of solid iron (oxyhydr)oxides, leading to the decomposition of organic matter and thus affecting the migration and diffusion of As and HMs [56]. Notably, the As and HMs in almost all sediments in this study showed weak retention and were vulnerable to redox fluctuations (with high proportions of the F2 and F3 fractions). Studies have shown that lithology can modulate redox fluctuations by influencing soil particle size, mineral composition and clay content, and can thereby indirectly affect the migration and diffusion of As and HMs. In particular, media with high clay content or rich in reducible iron (oxyhydr)oxides are more easily affected, making As and HMs in the aquatic environment re-accumulate and become re-fixed [56][57][58]. Therefore, once the As, Sb, Zn and Cd in the water system pass through areas with high clay content (sandy mudstone and clay rock in this study), they are very easily adsorbed or fixed, finally presenting high cumulative concentrations in the sediments, even in sediments of non-gold-mining areas (SB4). This mechanism may also reinforce the accumulation of As, Sb, Zn and Cd in the sediments of the gold mining area. Since some of the As, Sb, Zn and Cd are adsorbed or fixed upstream, the cumulative concentrations in the downstream sediments (SB3, TC5 and SB5) decrease significantly once the water has moved past the areas with high clay content. In addition, owing to the characteristics of limestone, which can hardly hinder their migration in water, they continue accumulating until the next area with higher clay content (TC7–TC8, SB5–SB6). Generally, As and HMs composed mainly of the residual fraction (F4) are more stable in the water environment [59][60][61].
However, in this study, the relationship between the percentage change of the F1 fraction of As, Sb, Zn and Cd and their total concentrations (Figure 5) shows that there is still a significant negative correlation between them, especially for As and Sb. Furthermore, it shows that As, Sb, Zn and Cd with high mobility can quickly be released from the sediment [62,63], reducing the cumulative concentration, even in the sediments of high-clay areas (TC7–TC8). In addition, limestone (mainly carbonate) also has a certain buffering effect. The resulting stable weakly acidic, weakly alkaline or neutral pH values can reduce the concentrations of soluble As and HMs in karst areas [10]. Most sediment samples in this study may also be affected by this phenomenon, because their pH values are neutral or weakly alkaline (7.13–8.86), and only a few sediment samples (SZ1, SZ4, TB3 and M1) are weakly acidic (6.23–6.94).

Contamination and Risk Assessment Based on the Igeo, Eir and RI Indices

Figure 6 shows the comparison of Igeo, Eir and RI of As and HMs in the sediments of the gold mine area and the four karst rivers. The Igeo and Eir indices show that the pollution degree of As and the other HMs is between grade I and grade II, and the risk level is slight to medium in most sediments, except for Pb. The contamination degree and risk values of As and Sb were the highest in the sediments: 64% and 4% of the sediments were polluted above grade III, and 24% and 12% of the sediments were above high risk, respectively. Notably, as much as 20% of the sediments reached grade V (severe) pollution for As, and the extremely high-risk sediments reached 20% for As and 4% for Sb, most of them from the mining area (M) and the TC karst river. In the mining area (M2) in particular, the Eir values of As and Sb were as high as 4085.13 and 947.41, respectively, far exceeding the extremely high-risk threshold of the Eir index (Eir > 320). Therefore, we must be alert to their excessive input. Generally, the average RI index is arranged in descending order M > CA > SZ ≈ TB > SR, consistent with the previous analysis. Except for Pb, arsenic and the other heavy metals are mostly in the range of slight to medium risk (12–20%), and 20% and 16.7% of the sediments in the SZ and SB rivers were at high risk. However, the sediments above high risk all appeared in the TC river and the mining area (M), accounting for 37.5% and 100% of the sediments in their respective water systems.

Comparison between This Study and Sediments around Other Gold Mining Areas

The average concentrations, Igeo and RI values of As, Sb, Zn and Cd in the sediments of this study were compared with those of surface sediments around other gold mining areas (Figure 7).
Figure 6: Comparison of Igeo, Eir and RI of As and HMs in the sediments of the gold mine area and the four karst rivers. (1) Igeo index: grade 0 represents no pollution; grade I, no to moderate pollution; grade II, moderate pollution; grade III, moderate to relatively high pollution; grade IV, relatively high to high pollution; grade V, high to extreme pollution; grade VI, extreme pollution. (2) Eir index: <40, slight risk; 40–80, medium risk; 80–160, relatively high risk; 160–320, high risk; ≥320, extremely high risk. RI index: <150, slight risk; 150–300, medium risk; 300–600, relatively high risk; 600–1200, high risk; ≥1200, extremely high risk.

The average concentrations of As and HMs in the sediments of most reference studies are lower than those in our study. A few individual averages, including As, are extremely low relative to local A, possibly owing to different gold ore types or mining methods, because the high As concentrations in this area are concentrated near the mining area. Sb is lower than local K. Zn is lower than local K and E, and the reason Zn is lower than local E is the same as above. Cd is lower than local E, I and J. For the Igeo and RI values, Zn in other studies is lower than in our study; on the contrary, Cd is mostly higher than in our research. However, two evaluation criteria were used across these sites: (1) A, B, D, K and M were evaluated by statistical methods or local background values. (2) C, F, G, H and L were evaluated by the geochemical background value of the crust (UCC). Most of the Igeo and RI values higher than those in our study were evaluated using UCC as the background value, except for area D. Notably, the background value of UCC is far lower than that of this study.
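The grading thresholds quoted in the caption translate directly into a small classifier, sketched below; the M2 values reported in the text are used as a check.

```python
# Helper that applies the grading thresholds from the Figure 6 caption,
# mapping an Eir or RI value back to its risk band.

def eir_grade(e: float) -> str:
    bands = [(40, "slight"), (80, "medium"),
             (160, "relatively high"), (320, "high")]
    for upper, label in bands:
        if e < upper:
            return label
    return "extremely high"

def ri_grade(r: float) -> str:
    bands = [(150, "slight"), (300, "medium"),
             (600, "relatively high"), (1200, "high")]
    for upper, label in bands:
        if r < upper:
            return label
    return "extremely high"

print(eir_grade(4085.13))   # M2 arsenic  -> 'extremely high' (>= 320)
print(eir_grade(947.41))    # M2 antimony -> 'extremely high'
```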
This means that the pollution and ecological risks of arsenic and heavy metals in those studies tend to be overstated relative to evaluations against local background values, so the sediments in this study area actually carry comparatively higher pollution and ecological risks.

Figure 7: Comparison between the sediments in this study and surface sediments around other gold deposits. Dashed line: the value with the largest gap above this study's. Red bold values exceed the values in this study. "/" represents missing values in the references. A—Orbiel valley (France) [43]. B—Zhaosu River catchment (China) [40]. C—Lom River (Adamawa, Cameroon) [64]. D—Anka Gold Mine (Nigeria) [42]. E—Tajum River (Indonesia) [65]. F—Afema Gold Mine (Côte d'Ivoire). G—Agbaou Gold Mine (Côte d'Ivoire). H—Bonikro Gold Mine (Côte d'Ivoire) [41]. I—Gold mine (Gold city, Nigeria) [66]. J—the Black Hills (South Dakota) [67]. K—Santurbán paramo (Colombia) [68]. L—Itapicuru-Mirim River (Brazil) [69]. M—Gold mine (Kesennuma City, Japan) [70].

Conclusions

This study analysed the accumulation, sources and influencing factors of As and HMs in the river sediments around a gold mine in Southwest China and assessed their pollution and ecological risks, obtaining the following results: (1) Compared with Cr, Co, Ni, Cu and Pb, the elements As, Sb, Zn and Cd were affected by gold mining, and their degree of accumulation and chemical activity were relatively high. Among them, the sediments with high As and Sb accumulation were mainly concentrated in the gold mine area (M). The cumulative concentrations of As, Sb, Zn and Cd in the sediments of the SZ, HS and TC karst rivers were higher than those of the SB karst river. (2) Gold mining is the primary source of As, Sb, Zn and Cd accumulation in the sediments, accounting for 40.6%, 47.3%, 41.2% and 44.2%, respectively. In addition, water transport, the local lithology, the proportion of the F1 fraction of the elements, and pH are influencing factors that cannot be ignored. (3) Apart from Pb, arsenic and the other heavy metals have reached slight to medium levels of pollution and risk, with As and Sb being the most serious. Their Igeo and Eir values indicate serious and moderate pollution, respectively, and both meet extremely high-risk standards, mainly in the sediments from the mining area (M) and the TC river. These results are of great significance for preventing and controlling arsenic and heavy metal pollution of sediments near gold mining areas of the same type. In addition, other influencing factors, such as the local lithology and the weak acid-soluble fraction, should be noted.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph20021432/s1. Table S1: As and HMs concentrations (mg/kg), pH values in sediment samples, and lithology of the area. Table S2: Statistical results of the concentrations (mg/kg) and pH of As and HMs in sediment samples and their corresponding BV. Table S3: Pearson correlation matrix of As and HMs in sediments.

Institutional Review Board Statement: This work does not involve any hazards, such as the use of animal or human subjects. There is no plagiarism in our research or any data, articles or theories of others.

Informed Consent Statement: This paper has not been and will not be submitted simultaneously to other journals. The paper is an entirely original work conducted by us without copying or plagiarism issues. The information reported in the paper is accurate to the best of our knowledge. A single study was not split into several parts to increase the number of submissions or be submitted to various journals or to one journal over time. We consent to publish.
Data Availability Statement: Some or all of the data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
7,337.4
2023-01-01T00:00:00.000
[ "Environmental Science", "Geology" ]
Optimal Decision Rules in Repeated Games Where Players Infer an Opponent's Mind via Simplified Belief Calculation

In strategic situations, humans infer the state of mind of others, e.g., emotions or intentions, adapting their behavior appropriately. Nonetheless, evolutionary studies of cooperation typically focus only on reaction norms, e.g., tit for tat, whereby individuals make their next decisions by considering only the observed outcome rather than their opponent's state of mind. In this paper, we analyze repeated two-player games in which players explicitly infer their opponent's unobservable state of mind. Using Markov decision processes, we investigate optimal decision rules and their performance in cooperation. The state-of-mind inference requires Bayesian belief calculations, which are computationally intensive. We therefore study two models in which players simplify these belief calculations. In Model 1, players adopt a heuristic to approximately infer their opponent's state of mind, whereas in Model 2, players use information regarding their opponent's previous state of mind, obtained from external evidence, e.g., emotional signals. We show that players in both models reach almost optimal behavior through commitment-like decision rules, by which players are committed to selecting the same action regardless of their opponent's behavior. These commitment-like decision rules can enhance or reduce cooperation depending on the opponent's strategy.

Introduction

Although evolution and rationality apparently favor selfishness, animals, including humans, often form cooperative relationships, each participant paying a cost to help one another. It is therefore a universal concern in the biological and social sciences to understand what mechanisms promote cooperation. If individuals are kin, kin selection fosters their cooperation via inclusive fitness benefits [1,2]. If individuals are non-kin, establishing cooperation between them is a more difficult problem. Studies of the Prisoner's Dilemma (PD) game and its variants have revealed that repeated interaction between a fixed pair of individuals facilitates cooperation via direct reciprocity [3][4][5]. A well-known example of such a reciprocal strategy is Tit For Tat (TFT), whereby a player cooperates with the player's opponent only if the opponent cooperated in the previous stage. If one's opponent obeys TFT, it is better to cooperate, because the opponent will then cooperate in the next stage and the cooperative interaction continues; otherwise, the opponent will not cooperate, and one's total future payoff will decrease. Numerous experimental studies have shown that humans cooperate in repeated PD games if the likelihood of future stages is sufficiently large [6].
In evolutionary dynamics, TFT is a catalyst for increasing the frequency of cooperative players, though it is not evolutionarily stable [7]. Some variants of TFT, however, are evolutionarily stable; Win Stay Lose Shift (WSLS) is one such example, in which a player cooperates with the player's opponent only if the outcome of the previous stage of the game was mutual cooperation or mutual defection [8]. TFT and WSLS are instances of so-called reaction norms, in which a player selects an action as a reaction to the outcome of the previous stage, i.e., the previous pair of actions selected by the player and the opponent [9]. In two-player games, a reaction norm is specified by the conditional probability p(a|a′, r′) by which a player selects the next action a depending on the previous actions of the player and the opponent, i.e., a′ and r′, respectively.

Studies of cooperation in repeated games typically assume reaction norms as the subject of evolution. A problem with this assumption is that it describes behavior as a black box in which an action is merely a mechanical response to the previous outcome; however, humans and, controversially, non-human primates have a theory of mind by which they infer the state of mind (i.e., emotions or intentions) of others and use these inferences as pivotal pieces of information in their own decision-making processes [10,11]. As an example, people tend to cooperate more when they are cognizant of another's good intentions [6,12]. Moreover, neurological bases of intention or emotion recognition have been found [13][14][15][16][17]. Despite the behavioral and neurological evidence, there is still a need for a theoretical understanding of the role of such state-of-mind recognition in cooperation; to the best of our knowledge, only a few studies have focused on examining the interplay between state-of-mind recognition and cooperation [18,19].
From the viewpoint of state-of-mind recognition, the above reaction norm can be decomposed as:

p(a|a′, r′) = ∑_s p(a|s) p(s|a′, r′),    (1)

where s represents the opponent's state of mind. Equation (1) contains two modules. The first module, p(s|a′, r′), handles the state-of-mind recognition: given the observed previous actions a′ and r′, a player infers that the player's opponent is in state s with probability p(s|a′, r′), i.e., a belief, and thinks that the opponent will select some action depending on this state s. The second module, p(a|s), controls the player's decision-making: the player selects action a with probability p(a|s), which is a reaction to the inferred state of mind s of the opponent. In the present study, we are motivated to clarify what decision rule, i.e., the second module, is plausible and how it behaves in cooperation when a player infers an opponent's state of mind via the first module. To do so, we use Markov Decision Processes (MDPs), which provide a powerful framework for predicting optimal behavior in repeated games when players are forward-looking [20,21]. MDPs even predict (pure) evolutionarily stable states in evolutionary game theory [22]. The core of MDPs is the Bellman Optimality Equation (BOE); by solving the BOE, a player obtains the optimal decision rule, called the optimal policy, that maximizes the player's total future payoff. Solving a BOE with beliefs, however, requires complex calculations and is therefore computationally expensive. Rather than solving the BOE naively, we instead introduce approximations of the belief calculation that we believe to be more biologically realistic. We introduce two models to do so and examine the possibility of achieving cooperation as compared to a null model (introduced in Section 2.2.1) in which a player directly observes an opponent's state of mind. In the first model, we assume that a player believes that an opponent's behavior is deterministic, such that the opponent's actions are directly (i.e., one-to-one) related to the opponent's states. A rationale for this approximation is that in many complex problems, people use simple heuristics to make fast decisions, for example a rough estimation of an uncertain quantity [23,24]. In the second model, we assume that a player correctly senses an opponent's previous state of mind, although the player does not know the present state. This assumption could be based on some external clue provided by emotional signaling, such as facial expressions [25]. We provide the details of both models in Section 2.2.2.
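To make the two-module decomposition concrete, the sketch below composes an illustrative belief module and decision module into the induced reaction norm of Equation (1). All probabilities are placeholder values, not quantities estimated in this study.

```python
# Sketch of the decomposition in Equation (1): a belief module p(s|a', r')
# feeding a decision module p(a|s). The numbers are illustrative.

S = ("H", "U")          # opponent states of mind
A = ("C", "D")          # actions

# Decision module p(a|s): cooperate when the opponent seems Happy
p_a_given_s = {"H": {"C": 0.95, "D": 0.05},
               "U": {"C": 0.05, "D": 0.95}}

# Belief module p(s|a', r'): e.g., mutual cooperation suggests state H
p_s_given_hist = {("C", "C"): {"H": 0.9, "U": 0.1},
                  ("C", "D"): {"H": 0.2, "U": 0.8},
                  ("D", "C"): {"H": 0.6, "U": 0.4},
                  ("D", "D"): {"H": 0.1, "U": 0.9}}

def reaction_norm(a: str, prev: tuple) -> float:
    """p(a | a', r') = sum_s p(a|s) p(s|a', r')."""
    return sum(p_a_given_s[s][a] * p_s_given_hist[prev][s] for s in S)

print(reaction_norm("C", ("C", "C")))  # 0.86: the induced reaction norm
```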
Analysis Methods

We analyze an MDP of an infinitely repeated two-player game in which an agent selects optimal actions in response to an opponent who behaves according to a reaction norm and has an unobservable state of mind. For the opponent's behavior, we focus on four major reaction norms, i.e., Contrite TFT (CTFT), TFT, WSLS and Grim Trigger (GRIM), all of which are stable and/or cooperative strategies [4,8,[26][27][28][29]. These reaction norms can be modeled using Probabilistic Finite-State Machines (PFSMs).

Model

In each stage game, the agent and opponent select actions, either cooperation (C) or defection (D). We denote the set of possible actions of both players by A = {C, D}. When the agent selects action a ∈ A and the opponent selects action r ∈ A, the agent gains the stage-game payoff f(a, r). The payoff matrix of the stage game is given by:

f(C, C) = 1, f(C, D) = S, f(D, C) = T, f(D, D) = 0,    (2)

for the four outcomes of mutual cooperation ((the agent's action, the opponent's action) = (C, C)), one-sided cooperation ((C, D)), one-sided defection ((D, C)) and mutual defection ((D, D)). Depending on S and T, the payoff matrix (2) yields different classes of stage game. If 0 < S < 1 and 0 < T < 1, it yields the Harmony Game (HG), which has a unique Nash equilibrium of mutual cooperation. If S < 0 and T > 1, it yields the PD game, which has a unique Nash equilibrium of mutual defection. If S < 0 and 0 < T < 1, it yields the Stag Hunt (SH) game, which has two pure Nash equilibria, one being mutual cooperation, the other mutual defection. If S > 0 and T > 1, it yields the Snowdrift Game (SG), which has a mixed-strategy Nash equilibrium, with both mutual cooperation and defection being unstable. Given the payoff matrix, the agent's purpose at each stage t is to maximize the agent's expected discounted total payoff E[∑_{τ=0}^{∞} β^τ f(a_{t+τ}, r_{t+τ})], where a_{t+τ} and r_{t+τ} are the actions selected by the agent and opponent at stage t + τ, respectively, and β ∈ [0, 1) is a discount rate. The opponent's behavior is represented by a PFSM, which is specified via the probability distributions φ and w. At each stage t, the opponent is in some state s_t ∈ S and selects action r_t with probability φ(r_t|s_t). Next, the opponent's state changes to the next state s_{t+1} ∈ S with probability w(s_{t+1}|a_t, s_t). We study four types of two-state PFSMs as the opponent's model, these being Contrite Tit for Tat (CTFT), Tit for Tat (TFT), Win Stay Lose Shift (WSLS) and Grim Trigger (GRIM). We illustrate the four types of PFSMs in Figure 1 and list all probabilities φ and w in Table 1. Here, the opponent's state is either Happy (H) or Unhappy (U), i.e., S = {H, U}. Note that H and U are merely labels for these states. An opponent obeying one of these PFSMs selects action C in state H and action D in state U up to a stochastic error, i.e., φ(C|H) = 1 − ε (hence φ(D|H) = ε) and φ(C|U) = ε (hence φ(D|U) = 1 − ε), where ε > 0 is a small probability with which the opponent fails to select an intended action.
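The PFSM opponents can be written down compactly. The sketch below implements the common action rule φ with error ε and the standard textbook transition rules for TFT, WSLS and GRIM; since Table 1 (and its transition error µ) is not reproduced here, CTFT is omitted and the transitions are treated as noise-free, which is an assumption of the sketch.

```python
import random

# Two-state PFSM opponents: states H and U, action noise EPS, and
# deterministic state transitions driven by the agent's action. The
# transition rules are the standard definitions of TFT, WSLS and GRIM.

EPS = 0.05  # probability of failing to play the intended action

def act(state: str) -> str:
    """phi(r|s): C in state H, D in state U, with error EPS."""
    intended = "C" if state == "H" else "D"
    if random.random() < EPS:
        return "D" if intended == "C" else "C"
    return intended

def next_state(machine: str, agent_action: str, state: str) -> str:
    """w(s'|a, s) for the three simpler machines."""
    if machine == "TFT":    # happy iff the agent just cooperated
        return "H" if agent_action == "C" else "U"
    if machine == "WSLS":   # happy iff the previous outcome matched
        matched = (agent_action == "C") == (state == "H")
        return "H" if matched else "U"
    if machine == "GRIM":   # unhappiness is absorbing
        return "U" if (state == "U" or agent_action == "D") else "H"
    raise ValueError(machine)

state = "H"
for a in ("C", "C", "D", "C"):          # a fixed agent action sequence
    r = act(state)
    state = next_state("GRIM", a, state)
print(state)                             # 'U': GRIM never forgives the D
```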
Bellman Optimality Equations

Because the repeated game we consider is Markovian, the optimal decision rules, or policies, are obtained by solving the appropriate BOEs. Here, we introduce three different BOEs for the repeated two-player games, assuming complete (Section 2.2.1) and incomplete (Section 2.2.2) information about the opponent's state. Table 2 summarizes the information available about the opponent in these three models. In our first scenario, which we call Model 0, we assume that the agent knows the opponent's present state, as well as which PFSM the opponent obeys, i.e., φ and w. At stage t, the agent selects a sequence of actions {a_{t+τ}}_{τ=0}^{∞} to maximize the expected discounted total payoff E_{s_t}[∑_{τ=0}^{∞} β^τ f(a_{t+τ}, r_{t+τ})], where E_{s_t} is the expectation conditioned on the opponent's present state s_t. Let the value of state s, V(s), be the maximum expected discounted total payoff the agent expects to obtain when the opponent's present state is s, given that the agent obeys the optimal policy and thus selects optimal actions in the following stage games. Here, V(s_t) satisfies the recursive relationship V(s_t) = max_{a_t} E_{s_t}[f(a_t, r_t) + β V(s_{t+1})]. The BOE when the opponent's present state is known is therefore represented as:

V^(0)(s_t) = max_{a_t} [ ∑_{r_t} φ(r_t|s_t) f(a_t, r_t) + β ∑_{s_{t+1}} w(s_{t+1}|a_t, s_t) V^(0)(s_{t+1}) ],    (5)

where we rewrite V as V^(0) for later convenience. Equation (5) reads that if the agent obeys the optimal policy, the value of having the opponent in state s_t (i.e., the left-hand side) is the sum of the expected immediate reward when the opponent is in state s_t and the expected value, discounted by β, of having the opponent in the next state s_{t+1} (i.e., the right-hand side). Note that the time subscripts in the BOEs derived hereafter (i.e., Equations (5), (10) and (12)) can be omitted because they hold true for any game stage, i.e., for any t.
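Because the state space is tiny, Equation (5) can be solved directly by value iteration. The sketch below does so for the GRIM opponent defined as in the previous sketch, with an illustrative one-parameter choice of S and T in the PD region; it recovers the "anticipation"-type policy CD discussed in the Results.

```python
# Value-iteration sketch for the Model 0 BOE (Equation (5)):
#   V(s) = max_a [ sum_r phi(r|s) f(a,r) + beta * sum_s' w(s'|a,s) V(s') ].
# GRIM opponent, normalized payoffs (R=1, P=0), illustrative S and T.

BETA, EPS = 0.8, 0.05
S_PAY, T_PAY = -0.2, 1.2                      # PD: S < 0, T > 1

def f(a, r):
    return {("C","C"): 1.0, ("C","D"): S_PAY,
            ("D","C"): T_PAY, ("D","D"): 0.0}[(a, r)]

def phi(r, s):                                # opponent action distribution
    p_c = 1 - EPS if s == "H" else EPS
    return p_c if r == "C" else 1 - p_c

def w(s2, a, s):                              # GRIM transitions
    nxt = "U" if (s == "U" or a == "D") else "H"
    return 1.0 if s2 == nxt else 0.0

V = {"H": 0.0, "U": 0.0}
for _ in range(500):                          # iterate to the fixed point
    V = {s: max(sum(phi(r, s) * f(a, r) for r in "CD")
                + BETA * sum(w(s2, a, s) * V[s2] for s2 in "HU")
                for a in "CD")
         for s in "HU"}

policy = {s: max("CD", key=lambda a: sum(phi(r, s) * f(a, r) for r in "CD")
                 + BETA * sum(w(s2, a, s) * V[s2] for s2 in "HU"))
          for s in "HU"}
print(policy)   # {'H': 'C', 'U': 'D'}: the 'anticipation' policy CD
```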
Incomplete Information Cases (Models 1 and 2)

For the next two models, we assume that the agent does not know the opponent's state of mind, even though the agent knows which PFSM the opponent obeys, i.e., the agent knows the opponent's φ and w. The agent believes that the opponent's state at stage t is s_t with probability b_t(s_t), which is called a belief. Mathematically, a belief is a probability distribution over the state space. At stage t + 1, b_t is updated to b_{t+1} in a Markovian manner, depending on the information available at the previous stage. The agent maximizes the expected discounted total payoff E_{b_t}[∑_{τ=0}^{∞} β^τ f(a_{t+τ}, r_{t+τ})], where E_{b_t} is the expectation based on the present belief b_t. Using the same approach as in Section 2.2.1 above, the value function V(b_t), when the agent has belief b_t regarding the opponent's state s_t, has the recursive relationship V(b_t) = max_{a_t} E_{b_t}[f(a_t, r_t) + β V(b_{t+1})]. The BOE when the opponent's present state is unknown is then:

V(b_t) = max_{a_t} ∑_{s_t} b_t(s_t) ∑_{r_t} φ(r_t|s_t) [ f(a_t, r_t) + β V(b_{t+1}) ],    (7)

where b_{t+1} is the belief at the next stage. Equation (7) reads that if the agent obeys the optimal policy, the value of having belief b_t regarding the opponent's present state s_t (i.e., the left-hand side) is the expected (by belief b_t) sum of the immediate reward and the value, discounted by β, of having the next belief b_{t+1} regarding the opponent's next state s_{t+1} (i.e., the right-hand side).

We can consider various approaches to updating the belief in Equation (7). One approach is to use Bayes' rule, which then yields the so-called belief MDP [30]. After observing actions a_t and r_t at the present stage, the belief is updated from b_t to b_{t+1} as:

b_{t+1}(s_{t+1}) = [ ∑_{s_t} w(s_{t+1}|a_t, s_t) φ(r_t|s_t) b_t(s_t) ] / [ ∑_{s_t} φ(r_t|s_t) b_t(s_t) ].    (8)

Equation (8) is simply derived from Bayes' rule as follows: (i) the numerator (i.e., Prob(r_t, s_{t+1}|b_t, a_t)) is the joint probability that the opponent's present action r_t and next state s_{t+1} are observed, given the agent's present belief b_t and action a_t; and (ii) the denominator (i.e., Prob(r_t|b_t)) is the probability that r_t is observed, given the agent's present belief b_t. Finding an optimal policy via the belief MDP is unfortunately difficult, because there are infinitely many beliefs, and the agent must simultaneously solve an infinite number of instances of Equation (7). To overcome this problem, a number of computational approximation methods have been proposed, including grid-based discretization and particle filtering [31,32]. When one views the belief MDP as a biological model of decision-making processes, these computational approximations are likely inapplicable, because animals, including humans, tend to employ simpler practices rather than complex statistical learning methods [24,33,34]. We explore such possibilities in the two models below.

A Simplification Heuristic (Model 1)

In Model 1, we assume that the agent simplifies the opponent's behavioral model in the agent's mind by believing that the opponent's state-dependent action selection is deterministic; we replace φ(r|s) in Equation (8) with δ_{r,σ(s)}, where δ is Kronecker's delta (i.e., it is one if r = σ(s) and zero otherwise). Here, σ is a bijection that determines the opponent's action r depending on the opponent's present state s, which we define as σ(H) = C and σ(U) = D. Using this simplification heuristic, Equation (8) is greatly reduced to:

b_{t+1}(s_{t+1}) = w(s_{t+1}|a_t, σ^{−1}(r_t)),    (9)

where σ^{−1} is the inverse map of σ from actions to states. In Equation (9), the agent infers that the opponent's state changes to s_{t+1} because the agent previously selected action a_t and the opponent was definitely in state σ^{−1}(r_t). Applying a time-shifted Equation (9) to Equation (7), we obtain the BOE that the value of the previous outcome (a_{t−1}, r_{t−1}) should satisfy, i.e.,

V^(1)(a_{t−1}, r_{t−1}) = max_{a_t} ∑_{s_t} w(s_t|a_{t−1}, σ^{−1}(r_{t−1})) ∑_{r_t} φ(r_t|s_t) [ f(a_t, r_t) + β V^(1)(a_t, r_t) ],    (10)

where we rewrite V(w(·|a_{t−1}, σ^{−1}(r_{t−1}))) as V^(1)(a_{t−1}, r_{t−1}). Here, w represents the approximate belief regarding the opponent's present state, formed by the agent from the previous outcome (a_{t−1}, r_{t−1}). Equation (10) reads that if the agent obeys the optimal policy, the value of having observed the previous outcome (a_{t−1}, r_{t−1}) (i.e., the left-hand side) is the expected (by the approximate belief w) sum of the immediate reward and the value, discounted by β, of observing the present outcome (a_t, r_t) (i.e., the right-hand side).

Use of External Information (Model 2)

In Model 2, we assume that after the two players decide actions a_t and r_t in a game stage (now at time t + 1), the agent comes to know, or correctly infers, the opponent's previous state ŝ_t by using external information. More specifically, b_t(s_t) in Equation (8) is replaced by δ_{ŝ_t,s_t}. In this case, Equation (8) is reduced to:

b_{t+1}(s_{t+1}) = w(s_{t+1}|a_t, ŝ_t).    (11)

Applying a time-shifted Equation (11) to Equation (7), we obtain the BOE that the value of the previous pair comprising the agent's action a_{t−1} and the opponent's (inferred) state ŝ_{t−1} should satisfy, i.e.,

V^(2)(a_{t−1}, ŝ_{t−1}) = max_{a_t} ∑_{s_t} w(s_t|a_{t−1}, ŝ_{t−1}) [ ∑_{r_t} φ(r_t|s_t) f(a_t, r_t) + β V^(2)(a_t, s_t) ],    (12)

where we rewrite V(w(·|a_t, ŝ_t)) as V^(2)(a_t, ŝ_t). Because we assume that the previous-state inference is correct, ŝ_{t−1} and ŝ_t in Equation (12) can be replaced by s_{t−1} and s_t, respectively. Equation (12) then reads that if the agent obeys the optimal policy, the value of having observed the agent's previous action a_{t−1} and knowing the opponent's previous state s_{t−1} (i.e., the left-hand side) is the expected (by the state transition distribution w) sum of the immediate reward and the value, discounted by β, of observing the agent's present action a_t and getting to know the opponent's present state s_t (i.e., the right-hand side).
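The three belief updates can be compared side by side. The sketch below implements the exact Bayes rule of Equation (8) together with the Model 1 heuristic of Equation (9) and the Model 2 external-information update of Equation (11), again using the GRIM machine from the earlier sketches; all numbers are illustrative.

```python
# Side-by-side sketch of Equations (8), (9) and (11), with phi and w
# defined inline for the GRIM machine (same as in the earlier sketches).

EPS = 0.05

def phi(r, s):
    p_c = 1 - EPS if s == "H" else EPS
    return p_c if r == "C" else 1 - p_c

def w(s2, a, s):   # GRIM: unhappiness is absorbing
    nxt = "U" if (s == "U" or a == "D") else "H"
    return 1.0 if s2 == nxt else 0.0

def bayes_update(b, a, r):
    """Equation (8): exact Bayesian belief update."""
    denom = sum(phi(r, s) * b[s] for s in "HU")
    return {s2: sum(w(s2, a, s) * phi(r, s) * b[s] for s in "HU") / denom
            for s2 in "HU"}

def heuristic_update(a, r):
    """Equation (9), Model 1: treat the observed action as revealing the
    state exactly, via sigma^{-1}(C) = H and sigma^{-1}(D) = U."""
    s_inferred = "H" if r == "C" else "U"
    return {s2: w(s2, a, s_inferred) for s2 in "HU"}

def external_update(a, s_hat):
    """Equation (11), Model 2: the previous state is known externally."""
    return {s2: w(s2, a, s_hat) for s2 in "HU"}

b0 = {"H": 0.5, "U": 0.5}
print(bayes_update(b0, "C", "C"))   # {'H': 0.95, 'U': 0.05}
print(heuristic_update("C", "C"))   # all mass follows sigma^{-1}
print(external_update("C", "H"))    # conditioned on the known state
```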
Next, using the obtained optimal policies, we study to what extent an agent obeying the selected optimal policy cooperates in the repeated game when we assume a model with incomplete information (i.e., Models 1 and 2) versus a model with complete information (i.e., Model 0). To do so, we consider an agent and an opponent playing an infinitely repeated game. In the game, the agent and the opponent fail to select the intended action with probabilities ν and ε, respectively. After a sufficiently large number of stages, the distribution of the states and actions of the two players converges to a stationary distribution. As described in Appendix B, we measure the frequency of the agent's cooperation in the stationary distribution; a minimal numerical version of this measurement is sketched at the end of this section.

Following the above procedure, all combinations of models, opponent types and optimal policies can be solved straightforwardly, but such proofs are too lengthy to show here; thus, in Appendix C, we demonstrate just one case, in which policy CDDD (see Section 3) is optimal against a GRIM opponent in Model 2.

Results

Before presenting our results, we introduce a shorthand notation to represent policies for each model. In Model 0, a policy is represented by the character sequence a_H a_U, where a_s = π^(0)(s) is the optimal action if the opponent is in state s ∈ S. Model 0 has at most four possible policies, namely CC, CD, DC and DD. Policies CC and DD are unconditional cooperation and unconditional defection, respectively. With policy CD, an agent behaves in a reciprocal manner in response to an opponent's present state; more specifically, the agent cooperates with an opponent in state H, hence the opponent cooperating at the present stage, and defects against an opponent in state U, hence the opponent defecting at the present stage. Policy DC is an asocial variant of policy CD: an agent obeying policy DC defects against an opponent in state H and cooperates with an opponent in state U. We call policy CD anticipation and policy DC asocial-anticipation. In Model 1, a policy is represented by the four-letter sequence a_CC a_CD a_DC a_DD, where a_{a′r′} = π^(1)(a′, r′) is the optimal action given the agent's and opponent's selected actions a′ and r′ at the previous stage. In Model 2, a policy is represented by the four-letter sequence a_CH a_CU a_DH a_DU, where a_{a′s′} = π^(2)(a′, s′) is the optimal action given the agent's selected action a′ and the opponent's state s′ at the previous stage. Models 1 and 2 each have at most sixteen possible policies, ranging from unconditional cooperation (CCCC) to unconditional defection (DDDD).

For each model, we identify four classes of optimal policy, i.e., unconditional cooperation, anticipation, asocial-anticipation and unconditional defection. Figure 3 shows under which payoff conditions each of these policies is optimal, with a comprehensive description of each panel given in Appendix D.
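The Appendix B measurement mentioned above can be mimicked numerically. The sketch below builds the Markov chain over (agent's previous action, opponent's previous action, opponent's present state) for a Model 1 agent playing CDDD against a GRIM opponent, and reads off the stationary frequency of cooperation; all numerical settings are illustrative assumptions.

```python
import numpy as np
from itertools import product

C, D, H, U = 0, 1, 0, 1
eps, mu, nu = 0.1, 0.1, 0.05
policy = {(C, C): C, (C, D): D, (D, C): D, (D, D): D}   # CDDD

phi = np.array([[1 - eps, eps], [eps, 1 - eps]])        # phi[s, r]
w = np.zeros((2, 2, 2))                                 # w[a, s, s_next]
for a, s in product((C, D), (H, U)):
    s_next = H if (a, s) == (C, H) else U               # error-free GRIM: HUUU
    w[a, s, s_next], w[a, s, 1 - s_next] = 1 - mu, mu

# Chain state: (agent's previous action, opponent's previous action,
#               opponent's present state of mind).
states = list(product((C, D), (C, D), (H, U)))
idx = {x: i for i, x in enumerate(states)}
P = np.zeros((8, 8))
for (ap, rp, s) in states:
    a_star = policy[(ap, rp)]
    for a in (C, D):                                    # agent's realized action
        pa = 1 - nu if a == a_star else nu
        for r in (C, D):                                # opponent's realized action
            for sn in (H, U):                           # opponent's next state
                P[idx[(ap, rp, s)], idx[(a, r, sn)]] += pa * phi[s, r] * w[a, s, sn]

# Stationary distribution: left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

coop = sum(pi[idx[x]] * ((1 - nu) if policy[x[:2]] == C else nu) for x in states)
print("stationary cooperation frequency:", coop)
```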
An agent obeying unconditional cooperation (i.e., CC in Model 0 or CCCC in Models 1 and 2, colored blue in the figure) or unconditional defection (i.e., DD in Model 0 or DDDD in Models 1 and 2, colored red in the figure) always cooperates or defects, respectively, regardless of an opponent's state of mind. An agent obeying anticipation (i.e., CD in Model 0, or CCDC against CTFT, CCDD against TFT, CDDC against WSLS and CDDD against GRIM in Models 1 and 2, colored green in the figure) conditionally cooperates with an opponent only if the agent knows or guesses that the opponent has a will to cooperate, i.e., the opponent is in state H. As an example, in Model 0, an agent obeying policy CD knows an opponent's current state, cooperating when the opponent is in state H and defecting when in state U. In Models 1 and 2, an agent obeying policy CDDC guesses that an opponent is in state H only if the previous outcome is (C, C) or (D, D), because the opponent obeys WSLS. Since the agent cooperates only if the agent guesses that the opponent is in state H, it is clear that anticipation against WSLS is CDDC. Finally, an agent obeying asocial-anticipation (i.e., DC in Model 0, or DDCD against CTFT, DDCC against TFT, DCCD against WSLS and DCCC against GRIM in Models 1 and 2, colored yellow in the figure) behaves in the opposite way to anticipation; more specifically, the agent conditionally cooperates with an opponent only if the agent guesses that the opponent is in state U. This behavior increases the number of (C, D) or (D, C) outcomes, which raises the agent's payoff in SG.

The boundaries that separate the four optimal policy classes are qualitatively the same for Models 0, 1 and 2, which is evident by comparing them column by column in Figure 3, although they are slightly affected by the opponent's errors, i.e., ε and µ, in different ways. These boundaries become identical for the three models in the error-free limit (see Table 5 and Appendix E). This similarity between models indicates that an agent using a heuristic or an external clue to guess an opponent's state (i.e., Models 1 and 2) succeeds in selecting appropriate policies as well as an agent that knows an opponent's exact state of mind (i.e., Model 0). To better understand the effects of the errors here, we show the analytical expressions of the boundaries in a one-parameter PD in Appendix F.

Figure 3 caption: The opponent obeys (a-d) CTFT, (e-h) TFT, (i-l) WSLS and (m-p) GRIM. (a,e,i,m) Error-free cases (ε = µ = ν = 0) of complete and incomplete information (common to Models 0, 1 and 2). (b,f,j,n) Error-prone cases (ε = µ = ν = 0.1) of complete information (i.e., Model 0). (c,g,k,o) Error-prone cases (ε = µ = ν = 0.1) of incomplete information (i.e., Model 1). (d,h,l,p) Error-prone cases (ε = µ = ν = 0.1) of incomplete information (i.e., Model 2). Horizontal and vertical axes represent the payoffs for one-sided defection, T, and one-sided cooperation, S, respectively. In each panel, Harmony Game (HG), Snowdrift Game (SG), Stag Hunt (SH) and Prisoner's Dilemma (PD) indicate the regions of these specific games. We set parameter β = 0.8.

Although the payoff conditions for the optimal policies are rather similar across the three models, the frequency of cooperation varies. Figure 4 shows the frequencies of cooperation in infinitely repeated games, with analytical results summarized in Table 3 and a comprehensive description of each panel presented in Appendix G.
Hereafter, we focus on the cases of anticipation, since it is the most interesting policy class we wish to understand. In Model 0, an agent obeying anticipation cooperates with probability 1 − µ − 2ν when playing against a CTFT or WSLS opponent, with probability 1/2 when playing against a TFT opponent and with probability (µ + ν²(1 − 2µ))/(2µ + ν(1 − 2µ)) when playing against a GRIM opponent, where µ and ν are the probabilities of error in the opponent's state transition and the agent's action selection, respectively. To better understand the effects of errors, these cooperation frequencies can be expanded in the errors, except in the GRIM case. In all Model 0 cases, the error in the opponent's action selection, ε, is irrelevant, because in Model 0 the agent does not need to infer the opponent's present state through the opponent's action. Interestingly, in Models 1 and 2, an agent obeying anticipation cooperates with a CTFT opponent with probability 1 − 2ν, regardless of the opponent's error µ. This phenomenon occurs because of the agent's interesting policy CCDC, which prescribes selecting action C if the agent itself selected action C in the previous stage; once the agent selects action C, the agent continues to try to select C until the agent fails to do so with a small probability ν. This can be interpreted as a commitment strategy that binds oneself to cooperation. In this case, the commitment strategy leads to better cooperation than that of the agent knowing the opponent's true state of mind; the former yields a frequency of cooperation of 1 − 2ν and the latter 1 − µ − 2ν. A similar commitment strategy (i.e., CCDD) appears when the opponent obeys TFT; here, an agent obeying CCDD continues to try to select C or D once action C or D, respectively, is self-selected. In this case, however, partial cooperation is achieved in all models; here, the frequency of cooperation by the agent is 1/2. When the opponent obeys WSLS, the frequency of cooperation by the anticipating agent in Model 2 is the same as in Model 0, i.e., 1 − µ − 2ν. In contrast, in Model 1, the frequency of cooperation is reduced by 2ε to 1 − 2ε − µ − 2ν; because the opponent can mistakenly select an action opposite to what its state dictates, the agent's guess regarding the opponent's previous state can fail, and this misunderstanding reduces the agent's cooperation when the opponent obeys WSLS. Finally, when the opponent obeys GRIM, cooperation in Models 1 and 2 is diminished relative to Model 0. This phenomenon again occurs due to a commitment-like aspect of the agent's policy, i.e., CDDD; once the agent selects action D, the agent continues to try to defect for a long time.
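As a quick consistency check on the Model 0 expression against GRIM, the snippet below reproduces the three limiting behaviors that Appendix G quotes for g_0; it is a verification sketch, not part of the original derivation.

```python
import sympy as sp

nu, mu, c = sp.symbols("nu mu c", positive=True)
g0 = (mu + nu**2 * (1 - 2 * mu)) / (2 * mu + nu * (1 - 2 * mu))

print(sp.limit(g0, nu, 0))                   # -> 1/2   (agent error negligible)
print(sp.simplify(g0.subs(mu, 0)))           # -> nu    (transition error negligible)
print(sp.limit(g0.subs(mu, c * nu), nu, 0))  # -> c/(2*c + 1)  (comparable errors)
```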
Discussion and Conclusion In this paper, we analyzed two models of repeated games in which an agent uses a heuristic or additional information to infer an opponent's state of mind, i.e., the opponent's emotions or intentions, then adopts a decision rule that maximizes the agent's expected long-term payoff.In Model 1, the agent believes that the opponent's action-selection is deterministic in terms of the opponent's present state of mind, whereas in Model 2, the agent knows or correctly recognizes the opponent's state of mind at the previous stage.For all models, we found four classes of optimal policies.Compared to the null model (i.e., Model 0) in which the agent knows the opponent's present state of mind, the two models establish cooperation almost equivalently except when playing against a GRIM opponent (see Table 3).In contrast to the reciprocator in the classical framework of the reaction norm, which reciprocates an opponent's previous action, we found the anticipator that infers an opponent's present state and selects an action appropriately.Some of these anticipators show commitment-like behaviors; more specifically, once an anticipator selects an action, the anticipator repeatedly selects that action regardless of an opponent's behavior.Compared to Model 0, these commitment-like behaviors enhance cooperation with a CTFT opponent in Model 2 and diminish cooperation with a GRIM opponent in Models 1 and 2. Why can the commitment-like behaviors be optimal?For example, after selecting action C against a CTFT opponent, regardless of whether the opponent was in state H or U at the previous stage, the opponent will very likely move to state H and select action C. Therefore, it is worthwhile to believe that after selecting action C, the opponent is in state H, and thus, it is good to select action C again.Next, it is again worthwhile to believe that the opponent is in state H and good to select action C, and so forth.In this way, if selecting an action always yields a belief in which selecting the same action is optimal, it is commitment-like behavior.In our present study, particular opponent types (i.e., CTFT, TFT and GRIM) allow such self-sustaining action-belief chains, and this is why commitment-like behaviors emerge as optimal decision rules. In general, our models depict repeated games in which the state changes stochastically.Repeated games with an observable state have been studied for decades in economics (see, e.g., [35,36]); however, if the state is unobservable, the problem becomes a belief MDP.In this case, Yamamoto showed that with some constraints, some combination of decision rules and beliefs can form a sequential equilibrium in the limit of a fully long-sighted future discount, i.e., a folk theorem [21].In our present work, we have not investigated equilibria, instead studying what decision rules are optimal against some representative finite-state machines and to what extent they cooperate.Even so, we can speculate on what decision rules form equilibria as follows. In the error-free limit, the opponent's states and actions have a one-to-one relationship, i.e., H to C and U to D. 
Thus, the state transitions of a PFSM can be denoted as s_CH s_CU s_DH s_DU, where s_{a′s′} is the opponent's next state when the agent's previous action was a′ and the opponent's previous state was s′. Using this notation, the state transitions of GRIM and WSLS can be denoted by HUUU and HUUH, respectively. Given this, in s_{a′s′}, we can rewrite the opponent's present state s with the present action r and the previous state s′ with the previous action r′ by using the one-to-one correspondence between states and actions in the error-free limit. Moreover, from the opponent's viewpoint, a′ in s_{a′s′} can be rewritten as a pseudo-state of the agent; here, because of the one-to-one relationship, the agent appears as if the agent had a state in the eyes of the opponent. In short, we can rewrite s_{a′s′} as r_{r′a′} in Model 1 and as r_{r′s′} in Model 2, where we flip the order of the subscripts. This rewriting leads HUUU and HUUH to CDDD and CDDC, respectively, which are part of the optimal decision rules when playing against GRIM and WSLS; GRIM and WSLS can be optimal when playing against themselves, depending on the payoff structure. The above interpretation suggests that some finite-state machines, including GRIM and WSLS, would form equilibria in which a machine and a corresponding decision rule, which infers the machine's state of mind and maximizes the payoff when playing against the machine, behave in the same manner.

Our models assume an approximate heuristic or the ability to use external information so as to analytically solve the belief MDP problem, which can also be solved numerically as a Partially Observable Markov Decision Process (POMDP) [32]. Kandori and Obara introduced a general framework to apply the POMDP to repeated games of private monitoring [20]. They assumed that the actions of players are not observable, but rather that players observe a stochastic signal that informs them about their actions. In contrast, we assumed that the actions of players are perfectly observable, but the states of players are not observable. Kandori and Obara showed that in an example of PD with a fixed payoff structure, grim trigger and unconditional defection are equilibrium decision rules depending on initial beliefs. We showed that in PD, the CDDD and DDDD decision rules in Models 1 and 2 are optimal against a GRIM opponent in a broad region of the payoff space, suggesting that their POMDP approach and our approach yield similar results if the opponent is sufficiently close to some representative finite-state machines.

Nowak, Sigmund and El-Sedy performed an exhaustive analysis of evolutionary dynamics in which two-state automata play 2 × 2-strategy repeated games [37]. The two-state automata used in their study are the same as the PFSMs used in our present study if we set ε = 0 in the PFSMs, i.e., if we consider that actions selected by a PFSM completely correspond with its states. Thus, their automata do not consider unobservable states. They comprehensively studied average payoffs for all combinations of plays between the two-state automata in the noise-free limit. Conversely, we studied optimal policies when playing against several major two-state PFSMs that have unobservable states by using simplified belief calculations.
In the context of the evolution of cooperation, there have been a few studies that examined the role of state-of-mind recognition. Anh, Pereira and Santos studied a finite-population model of evolutionary game dynamics in which they added a strategy of Intention Recognition (IR) to the classical repeated PD framework [18]. In their model, the IR player exclusively cooperates with an opponent that has an intention to cooperate, inferred by calculating its posterior probability using information from previous interactions. They showed that the IR strategy, as well as TFT and WSLS, can prevail in the finite population and promote cooperation. There are two major differences between their model and ours. First, their IR strategists assume that an individual has a fixed intention either to cooperate or to defect, meaning that their IR strategy only handles one-state machines that always intend to do the same thing (e.g., unconditional cooperators and unconditional defectors). In contrast, our model can potentially handle any multiple-state machine that intends to do different things depending on the context (e.g., TFT and WSLS). Second, they examined the evolutionary dynamics of their IR strategy, whereas we examined the state-of-mind recognizer's static performance of cooperation when using the optimal decision rule against an opponent.

Our present work is just a first step towards understanding the role of state-of-mind recognition in game-theoretic situations, thus further studies are needed. For example, as stated above, an equilibrium established between a machine that has a state and a decision rule that cares about the machine's state could be called a 'theory of mind' equilibrium. A thorough search for equilibria here is necessary. Moreover, although we assume it in our present work, it is unlikely that a player knows an opponent's parameters, i.e., φ and w. An analysis of models in which a player must infer an opponent's parameters and state would be more realistic and practical. Further, our present study is restricted to a static analysis. The co-evolution of the decision rule and state-of-mind recognition in evolutionary game dynamics has yet to be investigated.

Notation used in the appendices:
- b_t(s): agent's belief regarding the opponent's state at stage t, where s is the opponent's state at stage t
- V^(i): value function in Model i (= 0, 1 or 2)
- π^(i): agent's optimal policy in Model i (= 0, 1 or 2)
- p^(i)(a, s): stationary joint distribution of the agent's action a and the opponent's state s in Model i (= 0, 1 or 2)
- g_i: frequency of cooperation by the agent in Model i (= 0, 1 or 2)

Table 5. Optimal policies and their conditions for optimality (error-free limit); columns: Opponent, Policy, Condition for Optimality.

Unconditional cooperation (CC) and anticipation (CD and its variants, depending on opponent type) are optimal in HG and SH, respectively. With long-sighted future discounts, the regions in which either of them is optimal (i.e., the blue or green regions) broaden, and they can be optimal in SG or PD.

If the opponent obeys CTFT (see the CTFT row of Table 5 and Figure 3a for the error-free limit), unconditional cooperation (CC or CCCC) can be optimal in HG and some SG; further, anticipation (CD or CCDC) can be optimal in SH and some PD. Numerically obtained optimal policies in the error-prone case are shown in Figure 3b-d (ε, µ, ν = 0.1). With a small error, the regions in which the policies are optimal change slightly from the error-free case. With a fully long-sighted future discount (i.e., β → 1), the four policies that can be optimal when β < 1 remain optimal (see the CTFT row, β → 1 column of Table 5).
If the opponent obeys TFT (see the TFT row of Table 5 and Figure 3e for the error-free limit), unconditional cooperation (CC or CCCC) can be optimal in all four games (i.e., HG, some SG, SH and some PD), while asocial-anticipation (DC or DDCC) can be optimal in some SH and some PD, but the region in which anticipation is optimal falls outside of the drawing area (i.e., −1 < S < 1 and 0 < T < 2) in Figure 3e. Numerically obtained optimal policies in the error-prone case are shown in Figure 3f-h (ε, µ, ν = 0.1). With a fully long-sighted future discount (i.e., β → 1), among the four policies that can be optimal when β < 1, only unconditional cooperation (CC or CCCC) and asocial-anticipation (DC or DDCC) can be optimal (see the TFT row, β → 1 column of Table 5).

If the opponent obeys WSLS (see the WSLS row of Table 5 and Figure 3i for the error-free limit), unconditional cooperation (CC or CCCC) can be optimal in some HG and some SG, and anticipation (CD or CDDC) can be optimal in all four games (some HG, some SG, SH and some PD). Numerically obtained optimal policies in the error-prone case are shown in Figure 3j-l (ε, µ, ν = 0.1). With a fully long-sighted future discount (i.e., β → 1), among the four policies that are optimal when β < 1, only anticipation (CD or CDDC) and unconditional defection (DD or DDDD) can be optimal (see the WSLS row, β → 1 column of Table 5).

If the opponent obeys GRIM (see the GRIM row of Table 5 and Figure 3m for the error-free limit), unconditional cooperation (CC or CCCC) can be optimal in some HG and some SG, and anticipation (CD or CDDD) can be optimal in SH and PD. Numerically obtained optimal policies in the error-prone case are shown in Figure 3n-p (ε, µ, ν = 0.1). With a fully long-sighted future discount (i.e., β → 1), among the four policies that are optimal when β < 1, only unconditional cooperation (CC or CCCC) and anticipation (CD or CDDD) can be optimal (see the GRIM row, β → 1 column of Table 5).

Appendix E.
Isomorphism of the BOEs in the Error-Free Limit

In the error-free limit, an opponent's action selection and state transition are deterministic; i.e., using maps σ and ψ, we can write r_t = σ(s_t) and s_{t+1} = ψ(a_t, s_t) for any stage t. Thus, Equation (5) becomes:

V^(0)(s) = max_a [ f(a, σ(s)) + β V^(0)(ψ(a, s)) ].    (35)

Similarly, Equations (10) and (12) become:

V^(1)(a′, σ(s′)) = max_a [ f(a, σ(s)) + β V^(1)(a, σ(s)) ]    (36)

and:

V^(2)(a′, s′) = max_a [ f(a, σ(s)) + β V^(2)(a, s) ],    (37)

respectively, where s = ψ(a′, s′). Each right-hand side of Equations (36) and (37) depends only on s; thus, using some v, we can rewrite V^(1)(a′, σ(s′)) and V^(2)(a′, s′) as v(s) = v(ψ(a′, s′)). This means that the optimal policies obtained from Equations (36) and (37) are isomorphic to those obtained from Equation (35), in the sense that corresponding optimal policies have an identical condition for optimality. As an example, policy CC in Model 0 and policy CCCC in Models 1 and 2 are optimal against a CTFT opponent under identical conditions, S > 0 ∧ T < 1 + β(1 − S) (see Table 5). Because Equation (35) yields at most four optimal policies (i.e., CC, CD, DC or DD), Equations (36) and (37) also yield at most four optimal policies.

The resulting frequencies of cooperation are as shown in Table 3. Here, g_0's asymptotic behavior in ν and µ depends on how we assume the order of the errors ν and µ. If the error in the agent's action is far less than the error in the opponent's state transition (i.e., ν → 0), then we obtain g_0 → 1/2. If the error in the opponent's state transition is far less than the error in the agent's action (i.e., µ → 0), then we obtain g_0 → ν. If the two errors have the same order (i.e., µ = cν for some constant c and ν → 0), then we obtain g_0 → c/(2c + 1). For anticipation in Models 1 and 2, once the CDDD agent selects action D, the agent continues to try to select D, which is why the CDDD agent's cooperation is incurable, i.e., the frequencies given in Equations (39) and (40). Whatever the fraction terms in g_1 and g_2 are, g_1 and g_2 are O(ν) if ε, µ and ν are finite. For asocial-anticipation (DC or DCCC), a mechanism opposite to the above works, and the agent is mostly cooperative.

Figure 2. Different policies of the models. Depending on the given model, the agent's decision depends on different information, as indicated by orange arrows. At the present stage, in which the two players are deciding actions a and r, (a) policy π^(0) depends on the opponent's present state s, (b) policy π^(1) depends on the agent's previous action a′ and the opponent's previous action r′ and (c) policy π^(2) depends on the agent's previous action a′ and the opponent's previous state s′. The solid circles represent the opponent's known states, whereas dotted circles represent the opponent's unknown states. Black arrows represent probabilistic dependencies of the opponent's decisions and state transitions.

Table 2. Available information about the opponent.

Model | Opponent's model (φ, w) | Opponent's previous state | Opponent's present state
0     | known                   | -                         | known
1     | known                   | -                         | -
2     | known                   | known (inferred)          | -

Table 3. Frequencies of cooperation in infinitely repeated games.
9,951.2
2016-07-28T00:00:00.000
[ "Economics", "Psychology" ]
Synthesis of a New [3-(4-Chlorophenyl)-4-oxo-1,3-thiazolidin-5-ylidene]acetic Acid Derivative

The new methyl [3-(4-chlorophenyl)-2-{[(2,4-dichloro-1,3-thiazol-5-yl)methylidene] hydrazinylidene}-4-oxo-1,3-thiazolidin-5-ylidene]acetate was synthesized from 4-(4-chlorophenyl)-1-(2,4-dichloro-1,3-thiazol-5-yl)methylidene-3-thiosemicarbazide using dimethyl acetylenedicarboxylate as a thia-Michael reaction acceptor. The new compounds (3 and 4) were characterized by IR, 1H and 13C NMR spectroscopy methods.

Introduction

Toxoplasmosis is a common parasitic infectious disease that occurs all over the world. Toxoplasmosis is caused by the protozoan Toxoplasma gondii, whose ultimate host is Felidae. Approximately 30% of people have positive antibodies indicating toxoplasmosis [1]. The basic danger of the disease is the possibility of congenital infection during pregnancy and the reactivation of the disease in immunocompromised persons. In addition, the currently used drugs are not 100% effective for the treatment of toxoplasmosis, and this has prompted us to look for new synthetic compounds that could be used to combat this common parasite in the future.

In our previous research [12], we identified (4-oxothiazolidin-5-yl/ylidene)acetic acid derivatives with antiparasitic activity against T. gondii (Figure 1). The highlighted fragments (green and orange color) in Figure 1 are favorable for anti-T. gondii activity. Based on previous studies, we designed a compound which contains both highlighted fragments. In the first step of the synthesis, the corresponding thiazole-5-carbaldehyde was condensed with 4-(4-chlorophenyl)-3-thiosemicarbazide to give the thiosemicarbazone (3). In the last step of the synthesis, the targeted compound was obtained from 4-(4-chlorophenyl)-1-(2,4-dichloro-1,3-thiazol-5-yl)methylidene-3-thiosemicarbazide (3) and dimethyl acetylenedicarboxylate by thia-Michael addition of the sulfur atom to the triple bond and subsequent cyclization to give the (4-oxothiazolidin-5-ylidene)acetic acid derivative 4 (Scheme 1), which illustrates that precursor 3 is also useful for this type of reaction if other compounds (maleic anhydride, maleimide derivatives, etc.) are used as acceptors in the thia-Michael addition.

The structures of compounds 3 and 4 were supported by IR, 1H and 13C NMR spectroscopy methods (see Supplementary Materials). The 1H NMR spectra exhibit the characteristic signals of the para-substituted phenyl ring as two doublets in the range 7.44 to 7.76 ppm with spin-spin coupling J = 8.7 Hz. The signals derived from the proton of the CH=N group were observed at 8.29 ppm and 8.30 ppm for compounds 3 and 4, respectively. The characteristic proton signal of the methylidene group (CH=) of compound 4 was observed as a singlet at 6.94 ppm. All remaining signals arising from other parts of the molecule were present. Similarly, 13C NMR confirmed the presence of all carbon atoms in the molecule (details are presented in the experimental part).

General

All commercial reagents and solvents were purchased from either Alfa Aesar (Lancaster, UK) or Sigma-Aldrich (St. Louis, MO, USA) and used without further purification. The melting points were determined using a Gallenkamp MPD 350.BM 3.5 apparatus, Sanyo (Moriguchi, Japan), and are uncorrected. The purity of the compound was checked by TLC on plates with silica gel Si 60F254, produced by Merck Co. (Darmstadt, Germany). The 1H NMR and 13C NMR spectra were recorded on a Bruker Avance 300 MHz instrument (Bruker Corporation, Billerica, MA, USA) using DMSO-d6 as solvent and TMS as an internal standard. Chemical shifts were expressed as δ (ppm).
The IR spectrum was recorded on a Nicolet 6700 spectrometer (Thermo Scientific, Philadelphia, PA, USA). Elemental analysis was performed on an AMZ 851 CHX analyzer (PG, Gdańsk, Poland) and the results were within ±0.4% of the theoretical value.
940.8
2020-07-28T00:00:00.000
[ "Chemistry" ]
A New Framework for Higher Loop Witten Diagrams

The differential representation is a novel formalism for studying boundary correlators in $(d+1)$-dimensional anti-de Sitter space. In this letter, we generalize the differential representation beyond tree level using the notion of operator-valued integrals. We use the differential representation to compute three-point bubble and triangle Witten diagrams with external states of conformal dimension $\Delta=d$. We compare the former to a position space computation.

I. INTRODUCTION

Boundary correlation functions in anti-de Sitter (AdS) space provide an important laboratory for studying quantum field theory and quantum gravity. The AdS background regulates possible infrared (IR) divergences in perturbation theory [1-3], and the AdS/CFT correspondence provides a computationally tractable example of holography [4-9]. This work focuses on AdS boundary correlators, for which many different computational methods have been developed. A particularly fruitful approach to searching for new methods has been generalizing established techniques for computing scattering amplitudes in flat space. For example, the OPE inversion formula [38,39] is the AdS generalization of the Froissart-Gribov formula [40].

Motivated by this approach, a new representation of AdS boundary correlators, the differential representation, has emerged. It is in a sense analogous to the momentum representation of scattering amplitudes. Momentum vectors are replaced by non-commuting conformal generators acting on a contact diagram [41]. The differential representation of AdS boundary correlators was first proposed in Refs. [42,43] using the infinite tension limit of certain string theory expressions and further developed in Refs. [44-46].

In this letter, we generalize the differential representation of scalar AdS correlators beyond tree level by introducing the notion of operator-valued integration. We find that operator-valued integrals of scalar Witten diagrams can be interpreted as integrals over a non-commutative space. For example, operator-valued integrals obey a generalization of integration-by-parts (IBP) [47-58], which is discussed in Section V. After evaluating these operator-valued integrals, the higher loop correlators in AdS become functions of conformal generators acting on contact diagrams. To illustrate the new methodology, we compute three-point bubble and triangle Witten diagrams in d = 2 and d = 2, 3, 4 dimensions, respectively, using the differential representation. We compare the former to a more traditional computation performed in position space. To the author's knowledge, closed form expressions for the triangle Witten diagram in general dimension were previously unknown [59].

II. THE DIFFERENTIAL REPRESENTATION

We begin with a brief review of the differential representation. We work in embedding space, R^{d+1,1}, where P^A and X^A denote boundary and bulk coordinates respectively [60-63]. The boundary-to-bulk and bulk-to-bulk propagators of scalars are denoted as E_Δ(P, X) and G_Δ(X, X′) respectively, where the Δ subscripts are suppressed if Δ = d. Unless stated otherwise, the conformal dimension of all states, external and internal, is restricted to Δ = d for simplicity. For integrals over AdS boundary or bulk coordinates, we suppress the d or d + 1 superscript in the differential. Conventions are reviewed in Appendix A.
The differential representation of the n-point correlator takes the form

A_n = Â_n C_n,    (1)

where Â_n is the differential correlator, a collection of differential operators that act on a scalar contact diagram,

C_n = ∫_AdS dX ∏_{i=1}^n E(P_i, X).    (2)

Scalar differential correlators, Â_n, can be written solely using conformal generators, which in the embedding space formalism read

D_i^{AB} = P_i^A ∂/∂P_{i,B} − P_i^B ∂/∂P_{i,A}.    (3)

Isometry generators of bulk coordinates, denoted as D_X, are the same as Eq. (3) except with the replacement of the boundary coordinate P with the bulk coordinate X. While momentum space scattering amplitudes are functions on a commutative kinematic space parameterized by p_i^µ, differential correlators are operator-valued functions on a non-commutative kinematic space parameterized by D_i^{AB}. The AdS analog of momentum conservation is the Conformal Ward Identity (CWI), (Σ_{i=1}^n D_i^{AB}) A_n = 0.

An explicit example is instructive. Consider the integrand of the four-point s-channel Witten diagram

A_s = ∫_AdS dX dY E(P_1, X) E(P_2, X) G(X, Y) E(P_3, Y) E(P_4, Y).

To derive the differential representation, we first use that G is a Green's function of the AdS Laplacian for Δ = d, □_X G(X, Y) = −δ(X, Y), where □_X = −D_X^2 is the AdS Laplacian, to rewrite the position space Witten diagram with the bulk-to-bulk propagator replaced by an inverse Laplacian acting on a delta function. We can then use the identity

D_X^2 [E(P_1, X) E(P_2, X)] = (D_1 + D_2)^2 [E(P_1, X) E(P_2, X)]

to replace D_X^2 with (D_1 + D_2)^2. Ultimately, one finds the differential representation of the s-channel Witten diagram is

A_s = (1/(D_1 + D_2)^2) C_4.

The differential representation of higher point Witten diagrams is analogous to Feynman diagrams under the replacement of propagators with the inverse differentials, 1/D_I^2. For example, the five-point Witten diagram with two exchanged propagators is

A_5 = (1/(D_1 + D_2)^2) (1/(D_4 + D_5)^2) C_5.

D_I^2 and D_{I′}^2 always commute if they belong to the same Witten diagram on the support of the CWI [64]. Therefore, there is never any ambiguity in the ordering of the D_I^2 at tree level.

III. THE DIFFERENTIAL REPRESENTATION AT ONE LOOP

We now turn to the generalization of the differential representation beyond tree level. We motivate our construction using the triangle Witten diagram, A_3. Given the position space representation of A_3, we replace G(X_2, X_3) with its split representation, Eq. (13) [9,65-70], with Δ = d. Upon making this replacement, the triangle Witten diagram simplifies to the form of Eq. (14), where A_5 is a 5-point tree-level Witten diagram. So far, we have simply rewritten the loop diagram as a spectral integral over a tree diagram, as is standard [20]. We now write the tree diagram in the differential representation, Eq. (15), where the c superscript indicates that the conformal dimensions associated with the Q and Q′ external states in Eq. (15) depend on c. Combining Eqs. (14) and (15), we find Eq. (16). This is the differential representation of the triangle one-loop Witten diagram. The above manipulations can be performed on any one-loop Witten diagram. One simply uses the split representation, Eq. (13), to convert the one-loop, n-point Witten diagram to a tree-level (n + 2)-point Witten diagram in the differential representation [71]. For example, repeating the above manipulations for bubble and box Witten diagrams, one finds Eqs. (17) and (18). Notably, the first lines of Eqs. (16)-(18) are universal.

In contrast, the second lines are unique to the Witten diagram and analogous to the corresponding Feynman diagram under the replacement of the internal loop momentum with D_Q^{AB}. We interpret the universal integrals over c, Q and Q′ in the first lines of Eqs. (16)-(18) as the AdS analog of the flat-space loop integration measure. We refer to such scalar integrals collectively as an operator-valued integral and formally define the operator-valued integral of an operator-valued integrand, Î(D_Q, D_i), as in Eq. (19). Again, the c superscript refers to how the conformal dimensions of the Q and Q′ states depend on c. Our notation is meant to suggest that we should interpret Eq. (19) as an integral over D_Q.
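For reference, the split representation invoked above can be written schematically as follows; the precise c-dependent normalization is convention-dependent and should be taken from Refs. [9,65-70] rather than from this sketch.

```latex
% Schematic split representation (normalization convention-dependent):
G_{\Delta}(X,X') \;\propto\; \int_{-i\infty}^{i\infty} \frac{dc}{2\pi i}\,
  \frac{\Omega_c(X,X')}{\left(\Delta-\tfrac{d}{2}\right)^2 - c^2}\,,
\qquad
\Omega_c(X,X') \;\propto\; \int_{\partial \mathrm{AdS}} dP\,
  E_{\frac{d}{2}+c}(X,P)\, E_{\frac{d}{2}-c}(X',P)\,.
```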
Using this notation, the triangle Witten diagram is given by Eq. (21), and similarly for the bubble and box differential representations. The operator-valued integrals evaluate to functions of conformal generators of external states acting on contact diagrams, C_n. The operator-valued integral notation is interesting because it simplifies expressions and provides a representation of Witten diagrams analogous to Feynman diagrams. However, the utility of the operator-valued integral goes beyond aesthetics. We show in Sections IV and V that certain identities of scalar integrals generalize to operator-valued integrals and can be leveraged to simplify the evaluation of specific Witten diagrams.

IV. EXPLICIT CALCULATIONS AT THREE-POINT

The differential representation is particularly useful for performing direct integration of one-loop Witten diagrams. This is most apparent at three-point, where a number of simplifications occur, specifically a form of tensor reduction. For Feynman integrals, tensor reduction implies that three-point, one-loop integrals obey the identity of Eq. (22) for any integers N, M such that M ≥ 0, N ≥ 0 and N + M > 0 [72]. For three-point Witten diagrams, we conjecture that an analogous identity, Eq. (23), holds if Δ = d for all external states, with the same conditions on N and M, and for all possible orderings of the differential operators in the integrand. Eq. (23) is much more non-trivial than its flat-space analog.

Even if one assumes that tensor reduction is applicable to operator-valued integrals, conformal generators can in principle be contracted using the structure constants of the AdS isometry group as well as dot products. Using formulas in Appendix B, we explicitly checked that Eq. (23) holds for N + M ≤ 10. In Appendix C, we prove Eq. (23) for the special case that N = 0.

Eq. (23) can be leveraged to dramatically simplify the calculation of certain three-point Witten diagrams. As an illustrative example, consider the three-point bubble diagram, Eq. (25), in which the conformal dimension of the state running in the loop, Δ_l, is left unfixed. We restrict this computation to d = 2, as this Witten diagram diverges for d ≥ 3. The differential representation of this diagram follows from the construction of Section III. Since Δ_3 = d = 2, we find that D_3^2 = 0. Performing a Taylor series in D_Q · D_3, one finds that all terms vanish due to Eq. (23) except the leading term. Therefore, the bubble Witten diagram simplifies drastically. Substituting the definition of the operator-valued integral and using an identity from Refs. [9,73], we reduce the integral to a single contour integral, which can be evaluated using the residue theorem. The final result, Eq. (28), is cross-checked in Appendix D, where we evaluate the bubble Witten diagram in position space and find the answers agree. We can use the differential representation to evaluate more complex Witten diagrams, such as the triangle Witten diagram. We fix the conformal dimension of the states running in the loop to Δ_l = d for simplicity. The relevant operator-valued integral is then Eq. (21). We again take a Taylor series of the operator-valued integrand, except now in D_Q · D_1 and D_Q · D_2. All terms vanish except the leading term, due to Eq. (23). The final result can be converted into a single scalar integral, which can again be evaluated using the residue theorem. Evaluating the integral, we found a closed-form expression and that the integral is divergent for d ≥ 5, similar to flat space. Evaluating the c-integral for odd d is slightly harder than for even d because an infinite number of residues contribute and need to be re-summed.

V.
GENERALIZED IBP RELATIONS In flat space, IBP is an important tool for computing Feynman integrals [47][48][49][50][51][52][53][54][55][56][57][58].We now give a partial generalization of IBP for operator-valued integrals.We first note that the operator valued integral should be invariant under arbitrary conformal transformations of Q and Q , which implies where v is a tensor, is independent of v.We now rewrite the above operator-valued integral as where Î is Î with the replacement for all a ∈ {Q, 1, . . ., n}.If v is a constant tensor, then the above shift only acts non-trivially on D Q and dependence on D Q disappears.Let us now take v to be an infinitesimal in Eq. (31).Since the result is independent of v, the component linear in v must vanish, which imposes non-trivial linear relations among operator-valued integrals.The collection of identities derivable from this procedure does not necessarily span the space of all linear identities obeyed by operator-valued integrals, but is enough to illustrate that there are non-trivial relations which mimic their flat-space counter-parts. For example, we can apply the above procedure to the triangle Witten diagram.We assume v is an infinitesimal constant, so the replacement rule simplifies to where f AB CD,EF is a structure constant of the AdS isometry group.The above procedure ultimately implies the operator-valued integrand integrates to zero for external states with arbitrary conformal dimension.Unlike the operator-valued integrands previously considered, the differential operators in each term do not always commute and there are contractions of conformal generators with structure constants.Furthermore, the constant tensor v explicitly breaks conformal symmetry, so the CWI must be applied with care [74]. VI. OUTLOOK The differential representation is a powerful framework for evaluating Witten diagrams, as illustrated by the direct evaluation of the triangle Witten diagram.Beyond three-point, the differential representation implies linear relations among certain operator-valued integrands, which are the AdS generalization of IBP relations.In general, the similarities between Witten diagrams in the differential representation and Feynman diagrams imply that many techniques for evaluating Feynman diagrams should generalize to the differential representation.Beyond AdS, the differential representation can also be used to evaluate Witten diagrams in de Sitter space [45,[75][76][77][78][79][80]. We can invert Eq. (A16) to find a representation of Ω c (X 1 , X 2 ) in terms of bulk-to-bulk propagators, A particularly useful representation of G ∆ (X 1 , X 2 ) comes from substituting the split representation of the harmonic function, into Eq.(A16): Further identities are provided in the main text as needed. Appendix B: Useful Integral Identities In this appendix, we discuss how we checked that Eq. ( 23) holds.Crucially, first note that f (D 19) and is therefore independent of Q.However, non-trivial dependence on Q emerges upon acting ( We explicitly checked that the resulting expression vanishes upon integrating over Q for N + M ≤ 10. We review the computation strategy to integrate over Q.We find that the integrand contains terms whose Q-dependence takes the generic form We first consider the simplest specialization of Eq. (B1): Using the identity we can rewrite Eq. (B2) as Finally, using the identity which simplifies to This computation strategy generalizes to all integrals of the form Eq. (B1).Writing the result in a tensor version of Eq. 
(B1), the integral yields where traces are subtracted using η AB .Eq. (B8) was originally given in Ref. [73] by taking derivatives of the integral in the bulk coordinate X A .We have reproduced this formula here by direct integration to avoid subtleties that are relevant when taking derivatives in bulk or boundary coordinates in embedding space [81]. which provides a recursion relation for the a n,k coefficients, where We now take the expression in Eq. (C6) and integrate over Q using Eq.(B7).We find the result To show this expression is zero, substitute the identity in Eq. (C4) for all a n,k .There are now two sums over g k × (. ..) and f k−1 × (. ..) respectively.Substituting in the definitions of g k and f k in Eq. (C5), these two sums cancel.Therefore, the expression in Eq. (C6) vanishes.Unfortunately, proving Eq. ( 23) for non-zero N and M is much more difficult than the N = 0 case.We will sketch a proof strategy here.Similar to the N = 0 case, one would first establish an ansatz for (D .. as a sum of terms of the form One would then establish a recursion relation among coefficients similar to Eq. (C3) and perform an integral over Q using Eq.(B8).Unlike the N = 0 case, one would also need to subsequently integrate over the bulk coordinate X using the closed form expression of the 3point D-function.After integrating over Q and X, the hope is that the recursion relations between coefficients would be enough to show that the terms in the sum cancel among themselves, similar to what happens in the N = 0 case. Appendix D: Explicit Comparison for Bubble Diagram in AdS3 In this appendix, we evaluate the three-point bubble diagram in Eq. ( 25) in position space as a cross-check of our result in Section IV.To simplify the computation, we consider the more general case that P A 3 is in the bulk and then take the limit that P A 3 approaches the boundary, writing AdS dX 1 dX 2 E(P 1 , X 1 ) × E(P 2 , X 1 )(G ∆ l (X 1 , X 2 )) 2 G(X 2 , P 3 ) . (D1) We consider the split-representation of the d = 2 bulkto-bulk propagator, given in Eq. (A16), and the bubble, where B(c) was derived in Ref. [24], This integral can be evaluated using the residue theorem, but the contour is different for each term due to distinct behavior at |z| → ∞.The G 1±c (X 1 , P 3 ) term corresponds to a contour which includes the residue at c = ±1.The final result is As expected, we find that the operator-valued integration result in Eq. ( 28) matches the result derived from direct integration in position space in Eq. (D6).Given that A Bubble 3 is a one-loop diagram in AdS 3 , it was surprisingly straightforward to evaluate.The key to the above computation was using the split representation of the bubble diagram in Eqs.(D2) and (D3).Unfortunately, this computation strategy does not generalize to more complicated one-loop Witten diagrams, such as A 3 . D 2 I and D 2 I commute if I ⊆ I , I ⊆ I or I ∩ I = ∅, D 2
3,782.8
2021-12-15T00:00:00.000
[ "Physics" ]
Performance Prediction of Microwave Absorbers Based on POMA/Carbon Black Composites in the Frequency Range of 8.2 to 20 GHz

This paper presents a comparative study involving experimental and numerical behaviors of radar absorbing materials (RAM) based on conducting composites of poly(o-methoxyaniline) (POMA) and carbon black (CB). Samples of POMA/CB in an epoxy resin matrix were prepared. First, these samples were experimentally characterized by electric permittivity and magnetic permeability measurements in the frequency range of 8.2 to 12.4 GHz. Afterwards, a linear extrapolation of these electromagnetic parameters up to 20 GHz was carried out. These values were used as parameters for a set of simulations, developed from the numerical implementation of theoretical predictions. The main advantage of the performed simulations is to know the behavior of the POMA/CB/epoxy resin as RAM in a wide range of frequencies (8.2-20 GHz), prior to the experimental work. The validation of the simulations with experimental reflection loss measurements showed a good fit and allowed predicting the material behavior as RAM. The results show that the studied RAM presents good return loss values at different frequencies, for example, -32 dB (~99.95% of absorption) at 14.6 GHz and -18 dB at 19.2 GHz, for samples with 7 and 9 mm-thickness values, respectively. The simulation tool used in this study was adequate to optimize the RAM production, contributing to the reduction of development costs and processing time of this kind of material.

INTRODUCTION

Associated with the increasing use of electromagnetic waves in the GHz range in equipment and devices used in the telecommunication, military and medical areas, there is a need to monitor the effects of electromagnetic interference produced by the radiation generated (Feng et al. 2007; Folgueras et al. 2010). In seeking to eliminate or control these effects, we observe a growing number of studies involving the use of so-called microwave absorbing materials or radar absorbing materials (RAM) (Gama et al. 2011; Dias et al. 2012).

RAM are so named because they have properties that allow them to convert the energy of the incident electromagnetic radiation into thermal energy (Dias et al. 2012; Wang et al. 2017). For this energy conversion to take place, it is required to set the proper impedance values of such materials, in order to favor the propagation of the incident wave inside them, and not its reflection. Satisfying this condition promotes partial or almost total electromagnetic wave attenuation by physical and/or physico-chemical mechanisms (Feng et al. 2007). In pursuit of this condition, the impedance values of the absorbing materials, mainly at the interfaces, should be adjusted to approach the value of the impedance of free space (377 Ω) (Micheli et al. 2014). For this, the electric permittivity (ε) and magnetic permeability (µ) parameters and the thickness of a material are the fundamental physical characteristics determining the resulting attenuation of the electromagnetic wave incident on a RAM (Gama et al. 2011).

The absorbing materials are divided into two major groups, namely, one that has dielectric losses and one characterized by magnetic losses (Oh et al. 2004). Among the additives commonly used in processing dielectric absorbers, we have carbon black and intrinsically conductive polymers. In the case of magnetic absorbers, we have ferrites and carbonyl iron (Dias et al. 2012).
Among the conductive polymers studied as microwave absorbers, we have polyaniline and its derivatives, such as poly(o-methoxyaniline) (POMA) (Sanches et al. 2014).

It is worth mentioning that the impedance adjustment of a RAM involves much experimental work in search of the best formulation of the components involved in its processing. In this case, it is necessary to define the additives which act as microwave absorbers and the polymer matrix used both to anchor the additives and to confer the final shape of the RAM (Balanis 2012). Moreover, this area still requires thorough and complex experimental work on the electromagnetic characterization of the developed RAM. It is also known that the performance of these materials is related to the frequency of the incident electromagnetic wave and to the thickness of the processed materials, as shown in Eq. 1 (Balanis 2012):

Z = (µ/ε)^{1/2} tanh[ j (2πt/λ) (µε)^{1/2} ]    (1)

where Z is the normalized input impedance of the material; t is the sample thickness; µ is the magnetic permeability; ε is the electric permittivity of the material; and λ is the wavelength of the incident plane wave in free space. The complex parameters ε and µ are expressed by Eqs. 2 and 3, respectively, considering the real and imaginary components of the permittivity (ε′ and ε″, respectively) and the permeability (µ′ and µ″, respectively):

ε = ε′ − jε″    (2)

µ = µ′ − jµ″    (3)

Equation 1 gives the normalized impedance of the material, considering the normal incidence of the electromagnetic wave on it in a waveguide. By means of the normalized impedance, one can obtain the reflection loss (RL) and the impedance matching of the material positioned on a metal plate, Eq. 4 (Gama et al. 2011):

RL (dB) = 20 log |(Z − 1)/(Z + 1)|    (4)

When the appropriate setting of the complex parameters of permeability and permittivity is reached, the RL is maximal for a given frequency and a given material thickness (Vinayasree et al. 2013).

The literature has shown some contributions to the performance prediction of RAM over wider frequency ranges and for different thicknesses, supported by computational tools (Gama et al. 2011). This procedure is often based on values of permittivity and permeability extrapolated from experimental data collected in narrower frequency bands. These studies aim to reduce the costs, processing time and risk involved in establishing formulations tuned to the frequency range to be attenuated. However, it is important to mention that this procedure is also the subject of discussion among researchers, due to the complexity involved in the experimental process and in the parameters used for the extrapolation of the complex permittivity and permeability over a wide range of frequencies (Dantas et al. 2015).
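To make Eqs. 1-4 concrete, the sketch below computes the reflection loss of a metal-backed absorber layer. It is a minimal Python illustration (not the authors' Java RFE tool), and the ε and µ values are taken from the 10.2 GHz entries quoted later in the text.

```python
import numpy as np

def reflection_loss_db(f_hz, eps_r, mu_r, t_m):
    """Reflection loss (Eq. 4) of a metal-backed absorber layer (Eq. 1).

    f_hz  : frequency in Hz
    eps_r : complex relative permittivity, eps' - j*eps''
    mu_r  : complex relative permeability, mu'  - j*mu''
    t_m   : layer thickness in metres
    """
    c = 299_792_458.0                       # speed of light, m/s
    lam = c / f_hz                          # free-space wavelength
    # Normalized input impedance of the metal-backed layer (Eq. 1).
    z = np.sqrt(mu_r / eps_r) * np.tanh(1j * (2 * np.pi * t_m / lam)
                                        * np.sqrt(mu_r * eps_r))
    return 20 * np.log10(np.abs((z - 1) / (z + 1)))

# Measured values reported at 10.2 GHz for POMA/CB (20 wt%) in epoxy resin.
eps_r = 4.4911 - 1j * 0.3535
mu_r = 1.0332 - 1j * 0.0791
for t_mm in (5.0, 7.0, 9.0):
    rl = reflection_loss_db(10.2e9, eps_r, mu_r, t_mm * 1e-3)
    print(f"t = {t_mm} mm: RL = {rl:.1f} dB")
```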
It is known that the complex parameters of permittivity and permeability exhibit a monotonic decrease with increasing frequency, approaching a linear behavior (Kao 2004). However, this assumption may not be so well behaved, and the fit of the actual behavior of the material can deviate. Added to this, there are other factors that can contribute to further deviations in the prediction of RAM behavior, such as the assumptions that the material under study is homogeneous and that the calibration of the equipment that measures the complex parameters (a vector network analyzer), the adjustment of the waveguide flanges and the positioning of the sample in the waveguide are error-free. Thus, prediction work should always be considered with reservations and, whenever possible, be validated with experimental data, similarly to what occurs in other fields that use simulation and prediction.

In this sense, this study aims to present results of numerical predictions of the microwave attenuation performance of a composite based on POMA/carbon black in epoxy resin, in the frequency range of 8.2 to 20 GHz, correlating the numerically calculated behaviors with those experimentally determined.

MATERIALS AND METHODS

The materials used in this study were o-anisidine from Aldrich, with 99% purity, ammonium persulfate ((NH4)2S2O8) from Merck as oxidant, with 98% purity, 1.0 mol.L-1 HCl solution from Merck, with 37% minimum content of acid, carbon black type XC72R from the Cabot company, and a commercial bicomponent epoxy resin, type Araldite, in the proportion 2:1 of resin:hardener (wt/wt).

The POMA used in this study was synthesized based on the work of Mattoso and Bulhões (1992) and presented in detail in a previous study (Pinto and Rezende 2012). This synthesis was performed in the presence of carbon black, based on the work of Wu et al. (2008), resulting in a composite named POMA/CB. For this, 10 wt% of carbon black was added to 200 mL of 1.0 mol.L-1 HCl solution containing 6.2 mL of freshly distilled o-anisidine, stabilized at 0 °C. This mixture was left under agitation. Then, in a beaker, 2.88 g of ammonium persulfate were dissolved in 50 mL of 1.0 mol.L-1 HCl solution. The reaction medium containing the o-methoxyaniline was maintained at 0 °C during the dripping of the acid solution of ammonium persulfate. The total time of synthesis was 120 min (Mattoso and Bulhões 1992). Afterwards, the POMA/CB was washed and dried under vacuum.

The samples were processed in an epoxy resin matrix in the proportion of 20 wt% of the POMA/CB composite, through mechanical mixing of the components. This mixture was poured into a mold of dimensions 23 × 10 × 9.0 mm, which corresponds to the exact dimensions of the sample holder used in the electromagnetic characterization. Epoxy resin curing was carried out at room temperature for a period of 24 h.

The real and imaginary values of ε and µ of the studied sample were obtained, in triplicate, based on ASTM D5568-01, using a vector network analyzer from Agilent Technologies, model PNA-L N5230C, with four ports, a frequency generator between 300 kHz and 20 GHz, low-loss cables, connectors and a high-precision rectangular waveguide adapter, also from Agilent Technologies, model 00281-60016 OPTION 006. Figure 1 shows the apparatus and devices used in this work. The calculation of the complex parameters ε and µ was carried out with the aid of the software 85017E from Agilent, based on the Nicolson-Ross model (Nicolson and Ross 1970).
The method used for the extrapolation of the complex permittivity and permeability parameters to higher frequencies is based on a truncated Kramers-Kronig relation applied to finite-frequency data (Dantas et al. 2015). Considering a few assumptions, such as the behavior of the loss tangent and the overall nature of the corrections, the method is robust to within a few percent of relative error, provided the assumed hypotheses hold in the extrapolated frequency range. This method is described in the literature (Dantas et al. 2015).

A Java application was used to facilitate the mapping of the electromagnetic parameters of interest for the design of radar absorbing materials. The computational tool developed, named "RFE" (an acronym in Portuguese for Reflectivity, Frequency and Thickness), directly implements Eqs. 1-4. In terms of computational time for the numerical solution of these equations, the application shows very good performance, which in practical terms allows "real time" exploration of several thickness settings dynamically.

Figure 1 caption (partial): (2) sample holder; (3) waveguide section of port 2; (4) sample.

The RFE application outputs the attenuation as a function of the constitutive properties of the material and the incoming wave frequency. Specifically, the application receives as static input parameters the real and imaginary components of the electric permittivity (ε′ and ε″, respectively) and the magnetic permeability (µ′ and µ″, respectively), both as a function of frequency, and gives as output the reflectivity in dB as a function of the frequency range of interest, according to the material thickness (in mm). Graphical user interface (GUI) windows were implemented in the code. The output plot of the attenuation behavior as a function of frequency is updated in "real time" as the thickness is varied, which can be freely adjusted by the user. This feature facilitates a dynamic analysis of the attenuation behavior, allowing the user to compose a set of different scenarios for the RAM behavior.

ELECTROMAGNETIC CHARACTERIZATION

Obtaining the complex components of the electric permittivity and magnetic permeability assists in understanding the phenomena of absorption of electromagnetic radiation by RAM, as mentioned in the literature (Singh et al. 1999). Figure 2 shows the values of the real and imaginary components of the permittivity and permeability in X-band for the POMA/CB composite (20 wt%) in epoxy resin. First, it is observed that the values of the complex electric permittivity are greater than those of the magnetic permeability; for example, at 10.2 GHz, the parameters ε′, ε″, µ′ and µ″ are equal to 4.4911, 0.3535, 1.0332 and 0.0791, respectively (Table 1). This behavior is expected, knowing that the absorbing material under consideration is of the dielectric type, since the magnetic parameters are typical of this class of materials (µ′ close to 1 and µ″ close to zero) (Vinoy and Jha 1996). Thus, the electric permittivity presents more significant values, with the storage component (ε′) higher.

Figure 2 also shows that the four complex parameters experimentally measured are practically constant in the frequency range of 8.2 to 12.4 GHz. The observed behavior of the complex parameters is mentioned in the literature (Singh et al. 1999) and is attributed to the fact that these quantities are less influenced in this frequency band (GHz). The opposite behavior is observed at lower frequencies (Hz and MHz), where the variations are more pronounced and significant (Hong et al. 2015).
Since the values of the four components (ε', µ', ε'' and µ'') are presented on the same scale (Fig. 2a,b), the differences become less obvious, especially for the imaginary components (ε'' and µ''), whose values are proportionally smaller than those of the real components (ε' and µ'). Table 1 lists some of the values obtained experimentally at different frequencies, among the points collected for each of the real and imaginary components of ε and µ in the frequency range of 8.2 to 12.4 GHz.

Figure 3 shows the extrapolated values of the real and imaginary components of the permittivity and permeability from 12.4 GHz to 20 GHz. From this figure and from Table 1 it can be seen that the real and imaginary values of the permittivity decrease, in accordance with the literature (Hong et al. 2015), while the real and imaginary components of the magnetic permeability remain nearly constant, with a slight increase at higher frequencies. Overall, Table 1 shows that the real components ε' and µ' vary less between the measured values. The observed behavior of ε', i.e., its decrease with increasing frequency, is expected and described in the literature (Singh et al. 1999). On the other hand, a slight increase of µ' with increasing frequency is observed. The literature has shown that the magnetic and electrical properties of conducting polymers depend on the synthesis conditions and on polymer doping (Sanches et al. 2013). The data obtained in this study show that these properties vary slightly with increasing frequency. The imaginary components ε'' and µ'' are smaller, although they show the most significant variations, mainly in ε''. In the case of ε'', a slight increase of this parameter is observed. This behavior is described in the literature (Kao 2004) and is attributed to losses due to dipole oscillation of the molecules in this frequency range. Similarly, a slight increase of µ'' with increasing frequency is verified, suggesting that the magnetic properties of conducting polymers change with frequency in the GHz range.

The four components (ε', ε'', µ' and µ'') in the frequency range of 8.2 to 12.4 GHz (experimental data) and from 12.4 to 20 GHz (extrapolated data) were used in the simulations. The complex parameters above 12.4 GHz and up to 20 GHz were extrapolated from the experimental data (8.2-12.4 GHz), as previously described. This is the boundary condition adopted for this study.

Figure 4 shows an experimental reflectivity curve for the sample containing 20 wt% of POMA/CB in epoxy resin, with 9.0 mm thickness, in the frequency range of 8.2 to 12.4 GHz, together with curves resulting from simulations using the RFE algorithm.

The reflectivity curve obtained experimentally (Fig. 4) shows that the RAM sample with 9.0 mm thickness behaves as an efficient microwave absorber, with a maximum RL value of -24 dB (> 99% absorption according to Lee, 1991) at 11.6 GHz. The comparison of this curve with the one simulated for a sample of the same thickness (Fig. 4) shows a good fit between the experimental and simulated data. The small differences observed between these two curves are attributed to errors in the complex parameters measured experimentally in the X-band and to possible irregularities in the thickness and/or flatness of the sample, as cited in the literature (Gama et al. 2011).
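The extrapolation step feeding these simulations can be emulated simply. Since the conclusion of this study notes that approximately linear behavior of the measured ε and µ over 8.2-12.4 GHz was assumed, a least-squares line extended to 20 GHz is the simplest stand-in; the sketch below is only that simple emulation, not the truncated Kramers-Kronig procedure of Dantas et al. (2015), and the array names are placeholders.

```python
import numpy as np

def extrapolate_linear(f_meas, values, f_new):
    """Fit a line to measured data (8.2-12.4 GHz) and evaluate it on f_new.

    Real and imaginary parts are fitted independently so the routine
    works for the complex eps and mu columns alike.
    """
    p_re = np.polyfit(f_meas, values.real, 1)
    p_im = np.polyfit(f_meas, values.imag, 1)
    return np.polyval(p_re, f_new) + 1j * np.polyval(p_im, f_new)

# Placeholder usage: extend measured eps' - j*eps'' up to 20 GHz
f_meas = np.linspace(8.2e9, 12.4e9, 201)
f_new = np.linspace(12.4e9, 20.0e9, 201)
# eps_meas = ...  # complex values from the VNA run
# eps_ext = extrapolate_linear(f_meas, eps_meas, f_new)
```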
Despite the small differences observed between the experimental and simulated curves (for the 9.0 mm-thick specimen), it is possible to affirm that the tool used is robust and useful for predicting the behavior of the material as a microwave absorber, providing information on the type of absorbing material (resonant or broadband), the frequency of maximum attenuation and the attenuation efficiency, prior to laboratory work. Thus, from the comparison of experimental and simulated curves in the measured band (8.2 to 12.4 GHz), it is possible to affirm that the algorithm supports credible studies of RAM prediction in other frequency bands and with different thicknesses, saving time and financial resources.

Extrapolated values of ε', ε'', µ' and µ'' in the frequency range of 12.4 to 20 GHz were obtained. From these data and those obtained experimentally (8.2-12.4 GHz), simulations were performed for samples with thicknesses of 1, 3, 5, 7 and 9 mm, in the frequency range of 8.2 to 20 GHz (Fig. 4).

Table 2 summarizes the maximum RL values in dB and their frequencies of occurrence. From Table 2 and Fig. 4 we observe that the same RAM sample can present its maximum attenuation at different frequencies as the thickness varies, as described by Eq. 1. Similar behavior is reported in the literature for magnetic absorbers based on carbonyl iron (Gama et al. 2011; Singh et al. 1999). Figure 4 and Table 2 show significant attenuation values (at least -10 dB, corresponding to 90% attenuation according to Lee 1991) over the frequency range of 8.2 to 20 GHz for sample thicknesses of 5.0, 7.0 and 9.0 mm. We can see, for example, RL values of -32 dB (> 99.9% attenuation) at 14.7 GHz for the 7.0 mm sample and -14 dB at 20.0 GHz for the 5.0 mm sample. Figure 4 also shows that the 9.0 mm-thick sample presents two resonance peaks, at 11.6 GHz and 19.2 GHz, with a periodicity of 7.6 GHz between these two attenuation maxima. This phenomenon is attributed to wave phase cancellation associated with multiple wavelengths, which in turn is related to the physical thickness of the sample (Balanis 2012).

The results obtained in this work show that the sample of POMA/CB (20 wt%) in epoxy resin behaves as a microwave absorber in the frequency range of 8.2 to 20 GHz. The results obtained in the simulations support new experimental work on the processing of microwave absorbers tuned to preselected frequencies.

CONCLUSION

In this study, an algorithm was used to simulate the behavior of a microwave absorber based on POMA/carbon black over a large frequency range (8.2-20 GHz) and with different thicknesses. For this, the linearity of the experimental data of ε and µ in the frequency range of 8.2 to 12.4 GHz was assumed in order to obtain the extrapolated values in the range of 12.4 to 20 GHz, which in turn were used in the predictions. The simulation validation was carried out with experimental measurements of reflection loss in the frequency range of 8.2 to 12.4 GHz, and the results show good convergence and fit between the experimental and simulated RL curves. Thus, the use of this tool proved useful for predicting the behavior of microwave absorbers with different thicknesses over a wide frequency range, minimizing the cost and time devoted to processing absorbers tuned to selected frequencies. The results of this study also show that the composite based on POMA/carbon black presents excellent potential as RAM in the frequency range of 8.2 to 20 GHz. RL values of -32 dB at 14.6 GHz and -24 dB at 11.6 GHz are observed in the performed simulations for samples with 7.0 and 9.0 mm thickness, respectively.
Figure 4. Experimental (frequency range of 8.2 to 12.4 GHz) and simulated RL curves for the POMA/CB (20 wt%)/epoxy resin sample, with 1, 3, 5, 7 and 9 mm thickness, in the frequency range of 8.2 to 20 GHz.

Table 1. Experimental and extrapolated values of the real (ε' and µ') and imaginary (ε'' and µ'') components of POMA/CB (20 wt%) in epoxy resin at different frequencies.

Table 2. Maximum attenuation values and their frequencies of occurrence, obtained from the simulated RL curves for the POMA/CB (20 wt%) in epoxy resin sample.
CMS ECAL Calibration Strategy

The CMS Electromagnetic crystal Calorimeter (ECAL) must be precisely calibrated if its full potential performance is to be realized. Inter-calibration from laboratory measurements and cosmic ray muons will be available for all crystals and has been demonstrated to give good pre-calibration values at start-up; some crystals will also be inter-calibrated using an electron beam. In-situ calibration with physics events will be the main tool to reduce the constant term of the energy resolution to the design goal of 0.5%. In the following, the calibration strategy is described in detail.

Presented at CALOR 2006, Chicago, USA, June 5-9, 2006. Submitted to American Institute of Physics (AIP).

The ECAL Detector

The ECAL [1] is made of 75848 lead tungstate (PbWO4) crystals (Fig. 1a). They are arranged into a Barrel (61200 crystals), covering the central rapidity region (|η| < 1.5), and two Endcaps (7324 crystals each) which extend the coverage (up to |η| < 3.0). Due to the high density (8.28 g/cm³) and the small radiation length (X0 = 0.89 cm) of PbWO4, the calorimeter is very compact and can be placed inside the magnetic coil [2] needed for precise momentum measurements with the Tracker [3] and the Muon [4] systems. The small value of the Molière radius (2.2 cm) matches the very fine granularity required by the high particle density of the events at the LHC.

Crystals are organized in a quasi-projective geometry so that their axes make a 3° angle with respect to the vector from the nominal proton-proton interaction vertex, in both the azimuthal and polar angle projections. This slightly off-pointing geometry improves the hermeticity of the detector.

The electrons lose energy via bremsstrahlung in the Tracker material (Fig. 1b), while photons are converted into electron pairs. In addition, the strong magnetic field bends the electrons, causing the radiated energy to spread in φ. Both effects impact the electron/photon energy resolution, making the in-situ calibration of the ECAL a challenge. Special reconstruction algorithms [5] have been developed to recover the radiated energy from electrons and to reconstruct the converted photons.

ECAL Calibration

The physics reach of the ECAL, in particular the discovery potential for a low-mass Standard Model Higgs boson in the two-photon decay channel [6], depends on its excellent energy resolution. The intrinsic ECAL energy resolution measured in the Testbeam (by summing the deposited energy in the 3×3 array of crystals around the crystal in the beam) is expressed by the following parameterization (E in GeV):

(σ/E)² = (2.8%/√E)² + (0.125/E)² + (0.30%)²   (1)

which matches the design resolution for a perfectly calibrated detector. Mis-calibration will directly affect the constant term, degrading the overall ECAL performance.

The goal of the calibration strategy is to achieve the most accurate energy measurement for electrons and photons. Schematically, the reconstructed energy can be decomposed into three factors [5]:

E = G × F × Σi ci Ai   (2)

where G is a global absolute scale and F accounts for energy losses due to bremsstrahlung and containment variations. The ci factors are the inter-calibration coefficients, while the Ai are the signal amplitudes, in ADC counts, which are summed over the clustered crystals. The main source of channel-to-channel response variation in the Barrel is the crystal-to-crystal variation of scintillation light yield, which has an r.m.s. of ~15%.
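As a quick numerical illustration of parameterization (1), the snippet below evaluates the three terms and their quadrature sum; the coefficient values mirror the testbeam numbers quoted above and should be treated as nominal, not as the official calibration constants.

```python
import numpy as np

def ecal_resolution(e_gev, a=0.028, noise=0.125, c=0.003):
    """Relative energy resolution sigma/E from the three-term formula (1)."""
    return np.sqrt((a / np.sqrt(e_gev))**2 + (noise / e_gev)**2 + c**2)

for e in (10.0, 50.0, 120.0):
    print(e, ecal_resolution(e))   # the constant term dominates at high E
```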
In the Endcaps, the crystal signal yield and the product of the gain, quantum efficiency and photocathode area of the VPTs have an r.m.s. variation of almost 25%.

The target inter-calibration precision can only be achieved using physics events. Over the period of time in which the physics events used to provide an inter-calibration are taken, the calorimeter response should ideally remain stable and constant to high precision. One source of significant variation is the change in crystal transparency caused by irradiation and subsequent annealing. These changes are tracked and corrected using a laser monitoring system [7]. In addition, the sensitivity of both crystal and photo-detector response to temperature fluctuations requires precise control of the temperature stability. The water-cooling system guarantees a long-term temperature stability of the crystal volume and the APDs below the 0.1 °C level [7], in order to meet the target values for the energy resolution.

Calibration Roadmap

There are two distinct periods during which calibration can be performed. The first period is before the installation of the detector. During that period, ECAL crystals can be calibrated in the Testbeam (around 10k crystals only, due to the restricted beam time), using light yield measurements in the laboratories that check crystal quality, and by using cosmic muons. When the detector is fully operational, minimum bias events and/or Level-1 jet triggers can be used to achieve a fast crystal inter-calibration. Precise in-situ inter-calibration can be achieved with isolated electrons from sources such as W→eν or Z→e+e− decays. The absolute calibration scale, as well as other calibration tasks, can be addressed using the mass constraint for electrons from Z→e+e− decays. There is also the possibility of achieving a fast and accurate calibration using π0,η→γγ or Z→µµγ decays.

Calibration in the Testbeam

In the Testbeam, supermodules [1] are mounted on a rotating table that allows rotation in both the η and φ coordinates, so they can be fully scanned with high-energy electron beams. The incident electron positions are measured with a set of hodoscopes. The response of a single channel to electrons depends on the electron incident position. The dependence on the two lateral coordinates can be factorized and corrected. The inter-calibration coefficient ci for crystal i is defined as the ratio of the mean value of the corrected response to a reference value.

The statistical uncertainty remains negligible (less than 0.1%) provided that at least 1000 events are taken per crystal. The inter-calibration precision, when these constants are used in-situ, is expected to be limited by variations occurring in the time between their determination in the Testbeam and their use in the installed detector.
Calibration from Laboratory Measurements

The crystal calorimeter is being assembled in two regional centers: at CERN and at INFN-ENEA Casaccia near Rome. During the assembly phase, all detector components are characterized and the data are saved in the construction database. From these data it is possible to estimate the inter-calibration coefficient ci of each channel i as [7]:

ci ∝ 1 / (LYi × Mi × εQ,i × cele,i)   (3)

where LY is the light yield of the crystal, M and εQ are respectively the gain and quantum efficiency of the photo-detector, and cele is the calibration of the electronics chain. The crystal LY is measured in the laboratory with a photo-multiplier tube, exciting the crystal with a 60Co source that emits photons with an energy of 1.2 MeV. This gives an average LYPMT for the PbWO4 crystals of 10 pe/MeV at 18 °C. The measurements span about 7 years of crystal production. The stability of the LY bench calibration is crucial and is constantly controlled using reference crystals.

In order to establish the inter-calibration precision achieved with the laboratory measurements, the inter-calibration coefficients are compared with those from Testbeam measurements (Fig. 2a). As can be seen from their ratio (Fig. 2b), an inter-calibration precision of about 4.2% can be obtained from laboratory measurements.

Calibration with Cosmic Ray Muons

Inter-calibration coefficients for Barrel crystals are also obtained using cosmic muons that are well aligned with the crystal axes [8]. For this measurement the APD gain is increased by a factor of 4 with respect to the gain used during normal data taking, by increasing the bias voltage. This improves the signal-to-noise ratio and allows the selection of muons passing through the full length of the crystals by vetoing on signals in surrounding crystals. An overall precision of 3.0-3.5% should be achievable in one week of data taking. The statistical contribution to the overall uncertainty was estimated to be around 2%.

Calibration with the φ-uniformity Method

The proposed technique makes use of the φ-uniformity of the deposited energy to inter-calibrate crystals within rings at constant η. Due to the symmetry of the ECAL about η = 0, crystals with the same |η| are folded together. In the Barrel, there are 85 pairs of rings with 360 crystals per ring. In the Endcaps, there are 39 pairs of rings, and the number of crystals per ring varies with η.

Inter-calibration is performed by comparing the total energy deposited in each crystal with the mean of the distribution of total energies for all crystals in a ring. So far, two choices of event trigger have been investigated: random bunch crossings [9] and Level-1 jet triggers [10]. The inter-calibration precision for a given |η| is obtained from the Gaussian width of the distribution of ΣET for the pair of rings of crystals at that value of |η|.

A limit on the precision arises from non-uniformities in φ, primarily from the inhomogeneity of the Tracker material, but also from geometrical asymmetries such as the varying off-pointing angle of the Endcap crystals and the boundaries between Barrel supermodules.
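The ring-by-ring normalisation at the heart of the φ-uniformity method is simple enough to sketch. The snippet below is an illustrative reimplementation, not the CMS code: the array names are hypothetical, and real code must also handle the φ-dependent corrections mentioned above. Each crystal's accumulated ΣET is compared to the mean of its ring.

```python
import numpy as np

def phi_uniformity_constants(sum_et, ring_id):
    """Inter-calibration constants from accumulated transverse energy.

    sum_et  : per-crystal sum of deposited E_T over many triggers
    ring_id : eta-ring index of each crystal
    """
    c = np.empty_like(sum_et)
    for ring in np.unique(ring_id):
        sel = ring_id == ring
        c[sel] = sum_et[sel].mean() / sum_et[sel]   # ring mean / crystal
    return c
```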
Results based on a simulated sample of Level-1 jet triggers are shown in Fig. 3a for the Barrel and in Fig. 3b for the Endcaps. It can be seen that, without using any knowledge of the material distribution in the Tracker, the limit on the precision is close to 1.5% throughout the Barrel and between 3.0% and 1.0% in the fiducial region of the Endcaps. It is expected that this limit will be closely approached with a few tens of millions of events. This is equivalent to about 10 hours of data taking, under the assumption that 1 kHz of Level-1 bandwidth is allocated to single jet triggers and that the calibration software has access to this rate, either running on the Filter Farm or, more probably, running offline on a highly compacted data stream (a few tens of channels stored per event).

Calibration with Z→e+e−

The Z mass constraint in Z→e+e− decays is a powerful tool for calibration. A number of different uses are envisaged, from tuning the corrections of the electron energy reconstruction algorithms, as shown in [11], to the inter-calibration of regions of the ECAL, for example as a complement to the φ-uniformity method at start-up.

For a preliminary estimate of the inter-calibration factors between rings, electrons that radiated little were chosen, since their reconstructed energy shows the least dependence on the Tracker material and hence on η. The method has been tested taking the calorimeter regions to be rings of crystals (at fixed η) in the ECAL Barrel. Starting from a 5% mis-calibration between rings and a 2% mis-calibration between crystals within a ring, and using events corresponding to an integrated luminosity of 2.0 fb−1, a ring inter-calibration precision of 0.6% is obtained.

Calibration with Isolated Electrons

Once the Tracker is fully operational and well aligned, the inter-calibration of crystals can be performed using the momentum measurement of isolated electrons [12]. The main difficulty in using electrons for inter-calibration is that electrons radiate in the Tracker material in front of the ECAL, and both the energy and the momentum measurement (P) are affected. Moreover, the average amount of bremsstrahlung varies with the Tracker material thickness. The ECAL energy is measured by summing the energy deposited in the 5×5 array of crystals (S25) around the crystal with the maximum signal. The energy in the 5×5 array does not require the complexity of a single-crystal containment correction and helps to cleanly separate the inter-calibration from the corrections required by the super-clustering algorithms. In the Endcap, the energy measured in the Preshower and associated with the electron cluster is added to the energy summed in the crystals.

In order to extract the inter-calibration constants, the individual crystal contributions must be unfolded while minimizing the difference between the energy and momentum measurements. Two algorithms have been tested to achieve this minimization: an iterative technique, which was used for the in-situ calibration of the BGO crystals in the L3/LEP experiment, and a matrix inversion algorithm. The results, both in terms of precision and speed, are similar and show no dependence on the technique used. The event selection was based on variables that are sensitive to the amount of bremsstrahlung emission, chosen to select events with little bremsstrahlung.
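The iterative unfolding can be sketched compactly. The version below is a simplified, hypothetical reconstruction of an L3-style iteration, not the CMS implementation: each selected electron compares its calibrated S25 energy to the track momentum, and the resulting E/P pull is shared among the contributing crystals in proportion to their amplitude.

```python
import numpy as np

def iterative_intercalibration(events, n_crystals, n_iter=20):
    """events: iterable of (crystal_ids, amplitudes, p_track) per electron.

    Returns per-crystal constants c_i such that sum_i c_i * A_i tracks P.
    """
    c = np.ones(n_crystals)
    for _ in range(n_iter):
        num = np.zeros(n_crystals)
        den = np.zeros(n_crystals)
        for ids, amps, p in events:
            e25 = np.dot(c[ids], amps)          # current calibrated energy
            w = amps / amps.sum()               # share of the cluster energy
            num[ids] += w * p / e25             # pull toward E/P = 1
            den[ids] += w
        mask = den > 0
        c[mask] *= num[mask] / den[mask]
    return c
```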
Due to the variation of the average value of S25/P with pseudorapidity, caused by the variation of the amount of material in front of the ECAL, the calibration task is divided into two steps. In the first step, crystals in small regions of η, over which the average value of S25/P is rather constant, are inter-calibrated. In the second step, the small regions are inter-calibrated with each other.

The calibration precision versus η achievable for a fixed integrated luminosity (Fig. 4a) follows the Tracker material budget distribution. The simulated data used to obtain these results correspond to about 5(7) fb−1 in the Barrel (Endcaps). This estimate uses the PYTHIA cross section for W production with no k-factor. The calibration precision was also extensively studied in different φ-regions for the same η interval; there is no evidence of any φ-dependence.

The achievable calibration precision depends strongly on the number of electrons collected per crystal (HLT output). In Fig. 4b the inter-calibration precision versus the number of electrons per crystal is shown for three different areas of the ECAL Barrel. The curves, from bottom to top, represent the accuracy for the low, middle and high η regions of the ECAL Barrel. As can be seen, an inter-calibration precision of 0.6%, averaged over the Barrel, can be achieved with 10 fb−1 of integrated luminosity. After crystals within regions of η are inter-calibrated, the regions have to be calibrated among themselves. This task is accomplished by selecting electrons with minimal energy loss due to bremsstrahlung. After this selection, the resulting peak values of the S25/P distributions are consistent with the expected pseudorapidity dependence of shower containment.

The rate of the jet background in the single-electron trigger stream (HLT output) is estimated to be 27 Hz at low luminosity, of which 16 Hz are expected in the Barrel. The residual background has been investigated for the Barrel case. After the calibration selection is applied, the surviving background corresponds to a rate of 2.3 Hz. One third of this rate comes from b/c→e semileptonic decays. Such decays might be useful in the calibration process, increasing the overall calibration statistics. If required, the background can easily be reduced by a further factor of 10 using isolation cuts, with only a small effect on the signal efficiency.

Calibration with π0,η→γγ and Z→µµγ

The possibility of inter-calibrating the ECAL using the reconstructed mass of π0,η→γγ is being investigated. These low-mass particles could provide an important additional calibration tool, useful for relatively rapid inter-calibration of all crystals, for studying the effects of the crystal transparency corrections from the laser monitor, and for rapid check-out and monitoring of detector performance.

The inter-calibration obtained from low-energy π0→γγ is not sensitive to the Tracker material if unconverted photons are selected. The only effect of the Tracker material is a rate loss at larger η values due to photon conversions.
It has been shown that π0s useful for calibration can be located within events using the ECAL Level-1 trigger information, requiring very little processing time to extract the small amount of information relevant for calibration. In the ECAL Barrel, the π0 mass peak, with relatively little background, has a mass resolution of about 8%. Around 1.4% of the Level-1 trigger events have a usable π0 in the Barrel, and almost all of them are tagged by the isolated-electron Level-1 trigger. With an assumed Level-1 global trigger rate of 25 kHz, about 100 π0s per crystal can be obtained in a running period of less than 5 hours.

Events from η→γγ are also being studied. The signal has a much lower rate once the background is sufficiently reduced, but the mass resolution is about 3%. The η→γγ decay should be a useful calibration tool at higher energies and may prove very useful in the Endcaps, although it will take longer.

A significant rate of high-PT photons, with very little background and an energy that can be known independently of the ECAL, is available in radiative Z→µµ decays. These photons are being investigated as a valuable tool for various calibration-related tasks, as well as a probe for measuring the photon reconstruction efficiency. They can be used, for example, to inter-calibrate different regions of the ECAL (coefficient ci of Equation 2), and to tune the various cluster correction algorithms (coefficient F) and the overall energy scale (coefficient G). They can also be used to relate the energy scale of unconverted photons to that of electrons (from converted photons). For an integrated luminosity of only 1 fb−1, an average of nearly one such photon per crystal will be collected.

Summary

The calibration of the CMS crystal calorimeter will be performed before and after the assembly of the detector. Before the assembly, crystals will be calibrated in the Testbeam, using laboratory measurements and with cosmic muons. After the assembly, crystals will be calibrated using physics events. At start-up, the φ-uniformity inter-calibration technique will provide a precision of around 2% within a couple of hours. The design precision of 0.5% will be achieved using the E/P ratio of isolated electrons, mainly from W and Z decays. The mass reconstruction of π0,η→γγ and Z→µµγ will provide important additional calibration tools.

FIGURE 1. (a) A slice through a quadrant of the CMS Electromagnetic Calorimeter; (b) the Tracker material distribution in front of the ECAL.

FIGURE 2. (a) Inter-calibration coefficients obtained in the Testbeam versus those obtained from laboratory measurements; (b) ratio between Testbeam and laboratory inter-calibration coefficients.

FIGURE 3. Inter-calibration precision achieved with the φ-uniformity method (a) in the ECAL Barrel and (b) in the ECAL Endcaps, obtained with 11 million Level-1 jet trigger simulated events (circles). The expected limit on the inter-calibration precision is also shown (triangles).

FIGURE 4. (a) Calibration precision versus η using isolated electrons; (b) calibration precision versus HLT events per crystal for different η regions of the Barrel. Upper curve: the last 10 crystals in the Barrel (1.305 < η < 1.479); middle curve: 10 crystals in the middle of the Barrel (0.783 < η < 0.957); lower curve: the first 15 crystals in the Barrel (0.0 < η < 0.261). The third point along each line gives the precision for 5 fb−1 of integrated luminosity.
VIOLENT IN-CHAMBER LOADS IN AN OSCILLATING WATER COLUMN CAISSON

In 2009, four of the 16 chambers in the Mutriku breakwater-integrated Oscillating Water Column (OWC) were badly damaged by storms, probably due to breaking wave loads and slam within the chamber. To minimize the exposure of future plants to such risks, it is necessary to characterise the wave conditions under which such an installation could experience impact loads. This characterisation can be crucial for controlling the power take-off resistance to increase the survivability of the device during extreme weather. Large-scale physical model tests in the Grosse Wellenkanal (GWK) included a video camera installed inside the chamber facing the rear chamber wall. Pressure sensors in the ceiling of the chamber were used to quantify the water loads. In-chamber impact pressures of up to 8 ρgH were recorded on the chamber ceiling, associated with the 'sloshing' observed. The 'sloshing' phenomenon is not uncommon and should be considered in design processes.

INTRODUCTION

The idea of integrating a wave energy converter into a coastal defence or breakwater is not new. It allows cost sharing between energy generation and harbour/coastal defence functions. Unfortunately, the Mutriku case (see e.g. Medina-Lopez et al., 2015) demonstrated that design uncertainty and unpredictable weather may contribute to potential damage. There is extensive literature, based upon physical and numerical modelling, exploring the loadings on the structure and within the chamber. Computational approaches, however, often assume that the water column inside the chamber behaves simply, an assumption not supported by experimental visualisations of the water movement inside the chamber, e.g. Müller & Whittaker (1995). Those experiments showed that the water inside the chamber may behave violently under certain waves. Violent, impulsive pressures are, however, not easily quantified. The new experiments reported here quantify these internal impulsive loads for the first time at large scale.

METHODOLOGY

The experiments were conducted in the very large wave channel GWK in Hannover, Germany. The model OWC caisson was located 95 m from the wave maker. The power take-off (PTO) was modelled using three different orifice diameters (0.1, 0.2 & 0.3 m). In addition, a closed-orifice case was also tested. Two arrays of four wave gauges gave the offshore and inshore wave conditions. Five further gauges measured the in-chamber water levels, at the four corners and at the centre. Twelve pressure gauges were arranged on the front wall, the rear wall, and in the chamber ceiling. Two cameras were deployed to obtain qualitative images of the water movement: one inside the chamber facing the rear wall, and one outside the structure facing the front wall. A full description can be found in Viviano et al. (2016).

RESULTS AND ANALYSIS

A violent sloshing phenomenon is shown in the video sequence (Figure 1), for Tp = 5 s, Hm0 = 0.81 m. In other tests, the water outside the OWC sometimes fell below the front curtain wall. When such 'venting' occurred, the pressure inside the caisson equalised with the atmosphere through the gap, resulting in the loss of the negative pressure needed for the PTO.
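Since the impact loads are reported in the dimensionless form p/(ρgH), a small helper makes the normalisation explicit. The sketch below is illustrative only: the array name is a placeholder for a transducer record, and the wave height shown corresponds to the Hm0 = 0.81 m case above.

```python
import numpy as np

def normalized_peak_pressure(p_pa, wave_height_m, rho=1000.0, g=9.81):
    """Peak gauge pressure expressed in units of rho*g*H.

    p_pa : pressure time series from one ceiling transducer (Pa)
    """
    return np.max(p_pa) / (rho * g * wave_height_m)

# e.g. a ceiling record peaking near 8*rho*g*H for Hm0 = 0.81 m:
# ratio = normalized_peak_pressure(p_record, 0.81)
```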
In order to characterise the conditions under which a 'sloshing' event could occur, the in-chamber video records were revisited. The sloshing characterisation depends on the wave height (H), the characteristic chamber width over wave length (Bc/L), and the opening-to-chamber area ratio (Ao/Ac). A colour code is used to indicate the level of sloshing intensity observed: no sloshing (green), low sloshing (blue), medium sloshing (yellow), and high sloshing (red). No sloshing means that the water column surface looks calm while oscillating. Low sloshing means that the water column surface is not calm, but the oscillation is still visible. Medium sloshing shows a very visible water height difference between the front and rear of the chamber, pivoting about the centre as shown in Figure 1, with the average water level still oscillating. High sloshing shows similar characteristics to medium sloshing, but with almost zero mean water oscillation. In addition to the colour code, several symbols are used to indicate that no test is available (/), that the water level touched the ceiling (^), and that a major ceiling impact was recorded (!).

The results of the characterisation are plotted in Figure 3. It can be observed from the figure that sloshing mainly occurs during high wave conditions. It can also be inferred that a low sloshing occurrence in a closed / near-closed chamber may lead to major sloshing in a fully open chamber. For the same Bc/L, a higher wave height also almost always leads to high(er) sloshing. Wave heights lower than 0.4 m seem to be less dangerous across different Bc/L values (0.0697-0.1394), except for the fully open chamber and Bc/L = 0.1045.

Figure 4 shows the same characterisation regime for irregular waves. The colour code and symbols in this figure have the same meaning as in Figure 3. The irregular waves were generated with JONSWAP spectra, with the wave length calculated from the significant wave period. The sloshing characterisation here is based on the significant wave height (Hm0). Sloshing appears to increase in every case under irregular sea conditions compared with regular wave conditions. The case Bc/L = 0.1394, H(Hm0) = 0.26, and Ao/Ac = 0.88%, for example, is blue for regular waves and yellow for irregular waves. A similar pattern can be observed for the remaining cases. This may happen because, under irregular wave conditions, the maximum wave heights are about 1.8 times the significant wave height, so sloshing may be more likely during the largest waves. Low and medium sloshing conditions in a closed / near-closed chamber may always lead to major sloshing at a larger orifice opening, for both regular and irregular wave conditions. It can also be concluded that the fully open chamber is more prone to sloshing than the closed / near-closed chamber.

A limitation of the physical model in these experiments was that the chamber width was fixed. From a design point of view, the structure's peak resonance should be tuned to the frequency of the incoming waves for maximum energy absorption. The chamber width for this experiment was designed according to the literature, e.g. Takahashi, S., 1989. One can imagine, however, that the height difference between the front and the rear of the chamber water column might arise when the chamber width is much shorter or much longer than the wave, so that the wave is reflected by the chamber wall and creates in-chamber impacts on the ceiling.
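The Bc/L ratios above require the wave length L, which for the irregular cases was derived from the significant wave period via the linear dispersion relation. A minimal sketch of that computation is shown below; the water depth and chamber width values are hypothetical placeholders, not the GWK set-up.

```python
import numpy as np

def wavelength(T, depth, g=9.81, n_iter=100):
    """Linear-theory wave length from period T (s) and water depth (m),
    by fixed-point iteration on L = (g T^2 / 2pi) * tanh(2 pi depth / L)."""
    L = g * T**2 / (2.0 * np.pi)          # deep-water starting guess
    for _ in range(n_iter):
        L = g * T**2 / (2.0 * np.pi) * np.tanh(2.0 * np.pi * depth / L)
    return L

# Hypothetical example: a 5 s wave in 4 m of water, chamber width 2 m
L = wavelength(5.0, 4.0)
print(L, 2.0 / L)    # wave length and the corresponding Bc/L
```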
CONCLUSION

In-chamber impacts have been observed, with maximum pressures measured up to 8 ρgH. The sloshing regimes for both regular and irregular wave conditions have been characterised by means of in-chamber video records. Four different levels of sloshing have been characterised based on the Bc/L, H(Hm0), and Ao/Ac settings. A physical model with a changeable chamber width would be useful for future work. Some amount of 'sloshing' is not an uncommon situation and should be considered in the design and performance assessment of an OWC chamber.

Figure 3. Water column sloshing regime for the regular wave setting, with a colour code for no sloshing (green), low sloshing (blue), medium sloshing (yellow), and high sloshing (red), and symbols for sloshing impact (!), no test available (/), and water level reaching the ceiling (^).

Figure 1. In-chamber camera images, with t* representing the location of each image within a single wave cycle. Water near the rear wall rises up and moves quickly towards the front of the chamber, as indicated by the dashed line in Fig. 1(a), before impacting the ceiling (b). Next comes a wave trough (c) and a further sudden rise of the water near the rear (d). The corresponding pressures are shown in Figure 2, with the events of Figure 1 identified by arrows. This sequence of events results in a maximum pressure on the front part of the ceiling followed by a maximum pressure on the rear part of the ceiling.

Figure 2. Time-series pressure measurements on the ceiling for Tp = 5 s and Hm0 = 0.81 m.

Figure 4. Water column sloshing regime for the irregular wave setting, with the same colour code and symbols.
The Utilization of Triton X-100 for Enhanced Two-Dimensional Liquid-Phase Proteomics

One of the main challenges in proteomics lies in obtaining a high level of reproducible fractionation of the protein samples. The automated two-dimensional liquid-phase fractionation (PF2D) system manufactured by Beckman Coulter provides a process well suited for proteome studies. However, the protein recovery efficiency of this system is low when the protocol recommended by the manufacturer is used for metaproteome profiling of environmental samples. In search of an alternative method that can overcome existing limitations, this study replaced the manufacturer's buffers with Triton X-100 during the PF2D evaluation of Escherichia coli K12. Three different Triton X-100 concentrations (0.1%, 0.15%, and 0.2%) were used for the first-dimension protein profiling. As the first-dimension result was at its best in the presence of 0.15% Triton X-100, second-dimension protein fractionation was performed using 0.15% Triton X-100 and the standard buffers. When 0.15% Triton X-100 was used, protein recovery increased as much as tenfold. The elution reliability of 0.15% Triton X-100, determined with ribonuclease A, insulin, α-lactalbumin, trypsin inhibitor, and cholecystokinin (CCK), affirmed that Triton X-100 at 0.15% can outperform the standard buffers without having adverse effects on samples. This novel use of 0.15% Triton X-100 for PF2D can lead to greater research possibilities in the field of proteomics.

Introduction

The development of analytical tools for the rapid analysis and identification of expressed protein profiles in cells, tissues or organisms is currently an important area of biological research [1-3]. Although two-dimensional gel electrophoresis (2DE) is a classical technique that monitors and distinguishes multiple forms of proteins with differences in molecular mass or pI values, it can face difficulties with proteins of extreme mass (e.g., >200 kDa or <10 kDa) or pI values [4-6]. In addition, 2DE is not readily amenable to automation. Liquid-phase separation methods such as size-exclusion chromatography, affinity chromatography, and ion-exchange chromatography exhibit practical difficulties due to the lack of isoelectric point (pI) information and limited labeling efficiency [7-9]. Alternatively, the ProteomeLab PF2D platform (Beckman Coulter, USA), which can be used for the separation/fractionation as well as quantitative comparison of various biological and clinical samples, works in full automation, combining chromatofocusing separation and hydrophobic fractionation [10]. During the first-dimension chromatofocusing of PF2D, proteins are separated by their pI, and the separated proteins are collected along a pH gradient using a fraction collector [11,12]. Subsequently, the fractions collected from the first dimension are separated using reversed-phase chromatography in the second dimension, which separates on the basis of hydrophobicity [12]. The separated fractions are monitored with UV detection to observe changes in the proteome [13-15]. Selected peaks can then be identified by mass spectrometry. Although the PF2D system offers high loading capacity and an improved detection limit for lower-abundance proteins [16,17], its protein recovery efficiency during the chromatofocusing step is low when the standard protocol recommended by the manufacturer is used. Sheng et al.
[18] reported that the inclusion of 20% isopropanol in the isoelectric focusing (IEF) buffer increased the number of proteins they could identify in serum. They demonstrated improved protein recovery, but purified BSA was used instead of complete serum, so this buffer's ability to improve the recovery of all proteins remains unclear. The columns used with PF2D require the use of nonionic detergents such as Triton X-100 for the separation of proteins. Triton X-100 is a low-cost mixture of octylphenol ethoxylates, with an average of about 9-10 ethylene oxide units per molecule. In search of an alternative method that can increase the recovery of a wide range of proteins, this study modified the standard protocol using Triton X-100. The buffers recommended by Beckman Coulter's ProteomeLab PF2D protocol were replaced by Triton X-100 during protein profiling of Escherichia coli K12, and the recovery efficiency was determined at various Triton X-100 concentrations. Subsequently, the elution accuracy of Triton X-100 at its optimized concentration was confirmed with a control protein mixture of ribonuclease A, insulin, α-lactalbumin, trypsin inhibitor, and cholecystokinin (CCK).

2.2. Liquid Chromatography. Before the chromatofocusing, cell extracts were exchanged into the various start buffers using a PD-10 column (GE Healthcare Life Sciences, USA), and the first 3.5 mL fraction was collected. While the start buffer included in the ProteomeLab kit was designated "Start Buffer A," "Start Buffers B, C, and D" were prepared with the following: 0.1%, 0.15%, and 0.2% Triton X-100 in distilled water (EMD Chemicals, Inc., USA) at pH 8.4 for "B," "C," and "D," respectively; 6 M urea; 25 mM Bis-Tris; 1 M ammonium hydroxide. Protein concentration was estimated using Quant-iT Protein Assay Kits (Invitrogen, USA). All samples were diluted with each start buffer to obtain a final protein concentration of 1.5 mg/mL, and 2 mL of E. coli protein was injected into the chromatofocusing column. All protein samples were filtered through 0.2 μm PES membrane filters (Millipore, USA). The chromatofocusing was performed using the ProteomeLab PF2D (Beckman Coulter, USA) with an HPCF-1D column (250 mm × 2.1 mm, Eprogen, USA) that was loaded with each start buffer (pH 8.5 ± 0.1) for 120 min. Each start buffer was then equilibrated to the initial pH 8.5, and the protein sample was loaded at a flow rate of 0.2 mL/min for 45 min. The protein sample elution was initiated with a linear gradient of the various eluent buffers (pH 4.0 ± 0.1) that took ∼60 min to complete. The eluent buffer included in the ProteomeLab kit was labeled "Eluent Buffer A." "Eluent Buffers B, C, and D" were prepared with the following: 0.1%, 0.15%, and 0.2% Triton X-100 in distilled water at pH 4.0, respectively; iminodiacetic acid; 6 M urea; 10% v/v Polybuffer 74 (GE Healthcare, USA). Proteins were eluted and collected by their isoelectric point (pI) in the 4.0-8.5 range at 0.2 pH intervals into a 96-deep-well plate using the FC/I module. The remaining protein was finally eluted by washing the column with 1 M NaCl for 40 min. The column was then rinsed with 10 column volumes of distilled water before the next sample injection. The entire chromatofocusing step was operated at 20 °C with a flow rate of 0.2 mL/min, and elution profiles were monitored at 280 nm with a Beckman 166 UV detector (Beckman Coulter, USA).
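The pI-based collection step maps each eluate pH reading to a well of the 96-deep-well plate. As a small illustration of that bookkeeping (a hypothetical helper, not part of the PF2D software), 0.2-pH bins counted from the basic end of the gradient can be computed as follows.

```python
def fraction_well(ph, ph_hi=8.5, ph_lo=4.0, step=0.2):
    """Map an eluate pH to a 0-based fraction index (0.2-pH bins,
    counted from the basic end of the gradient)."""
    if not ph_lo <= ph <= ph_hi:
        return None                      # outside the collected range
    idx = int((ph_hi - ph) / step)
    n_bins = int((ph_hi - ph_lo) / step)
    return min(idx, n_bins - 1)          # clamp the pH 4.0 edge case

print(fraction_well(8.4))   # 0 (first well of the gradient)
print(fraction_well(7.0))   # 7
```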
The second-dimension separation was performed using an HPRP column (33 mm × 4.6 mm, 1.5 μm nonporous ODS-IIIE C18 silica beads, Eprogen, USA) at 50 °C with a flow rate of 0.75 mL/min. A 200 μL aliquot of each first-dimension chromatofocusing fraction was injected into the column and eluted with a 0-100% linear gradient of solvent A (0.1% w/v TFA in distilled water) and solvent B (0.08% w/v TFA in acetonitrile) over 35 min. At the end of each second-dimension run, the column was equilibrated with the initial mobile phase for 10 column volumes. Proteins were detected by a Beckman 166 UV detector (Beckman Coulter, USA) at 214 nm. Protein profiles obtained using UV detection were analyzed with ProteoVue 2D (Beckman Coulter, USA).

2.3. Determining the Elution Reliability of 0.15% Triton X-100 during PF2D. To investigate the elution accuracy of 0.15% Triton X-100, a mixture of five proteins purchased from Beckman Coulter (USA) was injected into the HPCF-1D column, which allows the elution of proteins in the pH range of 4.0-8.5. The first- and second-dimension protein separations were achieved following the standard procedure described in Section 2.2, using Start Buffer C (pH 8.4) and Eluent Buffer C (pH 4.0). Protein profile data were obtained using UV detection.

Results and Discussion

3.1. The Use of Triton X-100 for PF2D Chromatofocusing of E. coli. The eligibility and efficiency of Triton X-100 as an alternative to the buffers suggested for the ProteomeLab (Beckman Coulter, California, USA) PF2D were evaluated with E. coli, using the standard ProteomeLab buffers as well as solutions prepared with 0.1%, 0.15%, and 0.2% Triton X-100. The first-dimension chromatofocusing separates proteins according to differences in pI values, and the absorbance profiles created by each solution are shown in Figure 1. Proteins were eluted in order of decreasing pI values, and all four solutions eluted their first peak during the first 20 min of the sample loading period. As the HPCF-1D column used in this study elutes efficiently only proteins whose pI values lie within the 8.5-4.0 range, the first peak corresponds to protein that did not bind to the HPCF-1D column because its pI is greater than 8.5. The details of protein separation and pH gradient formation varied among the four solutions.

At first, E. coli chromatofocusing was performed using the buffer suggested for Beckman Coulter's ProteomeLab platform. As shown in Figure 1(a), the first unbound protein was eluted during the first ∼20 min, and the pH gradient started forming at 50 min. The pH gradient, which started forming at 50 min (from pH 8.1), lasted until 105 min (to pH 3.5), with a slight downward inflection at 92 min. During the pH gradient, multiple well-defined protein peaks were observed. Acidic proteins that remained in the HPCF-1D column were eluted after 130 min as a result of washing the column with 1 M NaCl. Under the given conditions, Beckman Coulter's standard buffer provided well-defined, high-resolution protein chromatofocusing results.

Alternatively, solutions that included 0.1%, 0.15%, and 0.2% Triton X-100 were prepared and used in place of the standard buffers to perform first-dimension protein profiling of E. coli. When the solution that included 0.1% Triton X-100 was used, the first protein peak created in ∼20 min was inverted, giving a negative AU280 reading (Figure 1(b)). This peak was followed by another inverted peak created during 48-58 min. When the pH gradient formed during 50-130 min, proteins of low pI values (pI < 5.0) were eluted indistinctly.
After 130 min, the protein remaining in the HPCF-1D column was eluted by column washing and created a large peak area. Subsequently, the concentration of Triton X-100 was increased to 0.15%. Figure 1(c), for 0.15% Triton X-100, shows significantly improved protein profiling results: unbound protein was eluted at <20 min, small amounts of protein were eluted during 20-60 min (pH 8.5-7.74) while the sample was being loaded, and a pH gradient was created during 60-110 min (pH 7.74-3.95). Unlike 0.1% Triton X-100, 0.15% Triton X-100 created a linear pH gradient, an indication that the capacity is even throughout its pH range. The chromatofocusing that took place over the pH gradient is also sharp. As with the other solutions, a large protein peak appeared at 160 min.

Finally, 0.2% Triton X-100 was used to determine whether increasing the concentration of Triton X-100 would bring further improvements for the chromatofocusing of E. coli. Interestingly, after the first unbound protein was eluted at <20 min, no significant protein elution was observed until the start of the pH gradient at 50 min (Figure 1(d)). As the pH gradient for 0.2% Triton X-100 was created over the shortest time frame (during 50-80 min, pH 7.11-3.89), a high volume of protein was eluted abruptly during the pH gradient, thus creating unreliable profiling data (Figure 1(d)).

The performance of each buffer was judged on its ability to achieve a well-defined pH gradient as well as an accurate pI-based protein separation over the given pH gradient range. Indistinct chromatofocusing results and negative AU280 readings made 0.1% Triton X-100 inadequate for PF2D. While the performance of 0.15% Triton X-100 was comparable to that of the standard buffers suggested for ProteomeLab PF2D, 0.2% Triton X-100 created a protein profile whose pI values were dubious, as elution happened in a shorter time frame. However, the AU280 readings of 0.15% and 0.2% Triton X-100 were as much as 9-10 times higher than those obtained with Beckman Coulter's buffers. The high AU280 readings obtained using 0.15% Triton X-100 were investigated further during the second-dimension separation, which was performed on the 16 first-dimension protein fractions generated using the standard buffers and the 0.15% Triton X-100 solution.

The second dimension of PF2D fractionates in order of increasing hydrophobicity. As shown in Figure 2(a), the 16 protein fractions in pH 7.92-3.95 obtained using the Beckman Coulter buffers were fractionated, and a thick protein band indicating a large quantity of protein was seen around pH 5.01-4.70. The second-dimension result for 0.15% Triton X-100, which started from 16 first-dimension fractions (pH 7.74-3.95), showed distinctive bands over wider pI ranges. This novel methodology utilizing 0.15% Triton X-100 enhances the protein recovery efficiency by at least tenfold.

3.2. Reliability Test Results of 0.15% Triton X-100. The first- and second-dimension chromatography results shown in this study confirmed that protein recovery can be increased in the presence of 0.15% Triton X-100 during PF2D analysis of E. coli, but its reliability with regard to accurate pI separation cannot be judged without comparing the results with standard proteins whose pI values are known. Therefore, the elution profiles of the five-protein test mixture (ribonuclease A, insulin, α-lactalbumin, trypsin inhibitor, cholecystokinin (CCK)) were generated using 0.15% Triton X-100 (Figure 3(a)). A linear pH gradient was observed during 52-108 min, from pH 8.2 to 4.1.
Multiple peaks were eluted at high resolution during the pH gradient. The accuracy of this protein elution was determined using the second-dimension protein chromatogram of the mixture of five proteins whose theoretical pI values are known (Figure 3(b)). The theoretical pI of ribonuclease A is pI > 8.5, but the actual elution took place over pI 8.1-8.0 (shown by protein band E in Figure 3(b)), possibly due to the limited pH elution range set by the HPCF-1D column. The elution intervals of insulin, α-lactalbumin, trypsin inhibitor, and CCK (shown by protein bands D, C, B, and A, respectively) were similar to their theoretical ranges. A detailed comparison of the experimental elution intervals of the five control proteins with their theoretical values is summarized in Table 1.

Conclusion

Triton X-100 is a common nonionic surfactant, and the experimental results of this study affirmed that 0.15% Triton X-100 can be applied to the PF2D analysis of proteins. Not only can 0.15% Triton X-100 greatly increase the amount of protein recovered from the chromatofocusing column, but it also enables PF2D analysis of proteins with low pI. Combining the beneficial qualities mentioned thus far, 0.15% Triton X-100 in the PF2D system can be exploited for further analyses of metaproteomes originating from various sources.
Design and Implementation of Interactive Flow Visualization Techniques

The demand for flow visualization software stems from the popular (and growing) use of Computational Fluid Dynamics (CFD) and the increasing complexity of simulation data. CFD is popular with manufacturers as it reduces the cost and time of production relative to the expense involved in creating a real physical model. Modifications to a physical model to test new prototypes may be non-trivial and expensive. CFD solvers enable a high degree of software-based testing and refinement before creating a real physical model.

Introduction

The demand for flow visualization software stems from the popular (and growing) use of Computational Fluid Dynamics (CFD) and the increasing complexity of simulation data. CFD is popular with manufacturers as it reduces the cost and time of production relative to the expense involved in creating a real physical model. Modifications to a physical model to test new prototypes may be non-trivial and expensive. CFD solvers enable a high degree of software-based testing and refinement before creating a real physical model.

The visualization of CFD data presents many different challenges. There is no single technique that is appropriate for the visualization of all CFD data. Some techniques are only suitable for certain scenarios, and sometimes an engineer is only interested in a subset of the data or in specific features, such as vortices or separation surfaces. This means that an effective flow visualization application must offer a wide range of techniques to accommodate these requirements. The integration of a wide variety of techniques is non-trivial, and care must be taken with the design and implementation of the software.

We describe our flow visualization software framework, which offers a rich set of state-of-the-art features. It is the product of over three years of development. This paper provides more details about the design and implementation of the system than are normally provided by typical research papers due to page limit constraints. Our application also serves as a basis for the implementation and evaluation of new algorithms. The application is easily extendable and provides a clean interface for the addition of new modules, so that more developers can utilize the code base in the future. A group development project differs greatly from an individual effort. To make this viable, strict coding standards [Laramee (2010)] and documentation are maintained. This will help to minimize the effort a future developer needs to invest to understand the codebase and expand upon it.

Throughout this chapter we focus on the design and implementation of our system for flow visualization. We address how the system's design is used to meet the challenges of visualizing CFD simulation data. We describe several key aspects of our design as well as the contributing factors that led to these particular design decisions.

The rest of this chapter is organized as follows: Section 2 introduces the reader to the field of flow visualization and provides pointers for further reading. Section 3 describes the user requirements and goals for our application. Section 4 provides an overview of the application design; a description of the major systems is then provided, and the key classes and relationships are discussed. The chapter is concluded in Section 5. Throughout the chapter, class hierarchies and collaboration graphs are provided for various important classes of the system.
Related work and background

The visualization of velocity fields presents many challenges, not the least of which is the notion of how to provide an intuitive representation of a 3D vector projected onto a 2D image plane. Other challenges include:

• Occlusion in volumetric flow fields
• Visualizing time-dependent flow data
• Large, high-dimensional datasets: it is commonplace to see datasets on the giga- and terabyte scale.
• Uncertainty: due to the numerical nature of CFD and flow visualization, error is accumulated at every stage. This needs to be minimized in order to provide accurate visualization results.

This is by no means an exhaustive list but serves as a representation to give the reader a feel for the context of the system. Flow visualization algorithms can be classified into four sub-groups: direct, texture-based, geometric and feature-based. We now provide a description of each of these classes and highlight some of the key techniques in each one.

Direct flow visualization

This category represents the most basic visualization techniques. This range of techniques maps visualization primitives directly to the samples of the data. Examples of direct techniques are color mapping of velocity magnitude or rendering arrow glyphs [Peng & Laramee (2009)].

Texture-based flow visualization

This category provides a dense representation of the underlying velocity field, providing full domain coverage. This range of techniques depicts the direction of the velocity field by filtering a (noise) texture according to the local velocity information. This results in the texture being smeared along the direction of the velocity. Line Integral Convolution (LIC) by Cabral and Leedom [Cabral & Leedom (1993)] is one seminal texture-based technique. Other texture-based variants include Image Space Advection (ISA) by Laramee et al. [Laramee et al. (2003)] and IBFVS [van Wijk (2003)], both of which use image-based approaches to apply texture-based techniques to velocity fields on the surfaces of CFD meshes. It should be noted that, due to the dense representation of the velocity field, texture-based techniques are more suited to 2D flow fields and flow fields restricted to surfaces. Three-dimensional variants do exist [Weiskopf et al. (2001)], but occlusion becomes a serious problem and reduces their effectiveness. We refer the interested reader to [Laramee et al. (2004)] for a thorough overview of texture-based techniques.

Geometric flow visualization techniques

This category involves the computation of geometry that reflects the properties of the underlying velocity field. The geometry used generally consists of curves, surfaces and volumes. The geometric primitives are constructed using numerical integration, with interpolation used to reconstruct the velocity field between samples.

Typically the geometry remains tangent to the velocity field, as in the case of streamlines and streamsurfaces [Hultquist (1992); McLoughlin et al. (2009)]. However, non-tangential geometry also illustrates important features; streaklines and streaksurfaces [Krishnan et al. (2009)] are becoming increasingly popular. In fact, this application framework was involved in the development of a novel streak surface algorithm [McLoughlin, Laramee & Zhang (2010)]. A thorough review of geometric techniques is beyond the scope of this paper, and we refer the interested reader to a survey on the topic by McLoughlin et al. [McLoughlin, Laramee, Peikert, Post & Chen (2010)].
Feature-based flow visualization

Feature-based techniques are employed to present a simplified sub-set of the velocity field rather than visualizing it in its entirety. Feature-based techniques generally focus on extracting and/or tracking characteristics such as vortices, or on representing a vector field using a minimal amount of information via topological extraction, as introduced by Helman and Hesselink [Helman & Hesselink (1989)]. Once again, a thorough review of this literature is beyond the scope of this chapter and we refer the interested reader to in-depth surveys on feature-based flow visualization by Post et al. and Laramee et al. [Laramee et al. (2007); Post et al. (2003)].

System requirements and goals

Our application framework is used to implement existing advanced flow visualization techniques as well as serving as a platform for the development and testing of new algorithms. The framework is designed to be re-used by future developers researching flow visualization algorithms, to increase efficiency and research output. Figure 1 shows a screenshot of the application in action.

Support for a wide variety of visualization methods and tools

Our application is designed as a research platform. A variety of visualization methods have been implemented so that new algorithms can be directly compared with them. Therefore, the system is designed to be easily extensible. Several key flow visualization methods and features are integrated into our application framework, as described in the following sections.

Interactivity

Users generally require flexibility over the final visualization and favor feedback as quickly as possible after modifying visualization parameters. Our system is designed to enable a high level of interaction for the user. Providing such a level of interaction allows for easier exploration of the data, and the user can tailor the resulting visualization to their specific needs. This level of interactivity is also of use to the developer. Some algorithms are inherently dependent upon threshold values and parameters. Providing the functionality for these to be modified at run-time allows the programmer to test varying values without having to modify and recompile the code. Once the final value has been found, it is then possible to remove the user option and hard-code it as a constant if required.

Support for large, high-dimensional, time-dependent simulations

The application is used to visualize the results of large simulations comprised of many time-steps. Not every time-step has to be present in main memory simultaneously. Our application uses a streaming approach to handle large data sets. A separate data-management thread continually runs in the background. When a time-step has been used, this manager is responsible for unloading its data and loading in the data for the next (offline) time-step. A separate thread is used to minimize the interruption that arises from blocking I/O calls. If a single-threaded solution were used, the system would compute the visualization as far as possible with the in-core data and then have to halt until the new data was loaded. Note that in many cases the visualization computation still outpaces the data loading even in a multi-threaded solution; however, the delay may be greatly reduced.
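The following is a minimal sketch of such a background data-management thread, assuming a simple "load the next requested time-step" contract. The class and function names (TimeStepCache, loadFromDisk) are our illustrative assumptions, not the framework's actual API.

#include <condition_variable>
#include <mutex>
#include <thread>

class TimeStepCache {
public:
    TimeStepCache() : m_nextToLoad(0), m_quit(false),
                      m_worker(&TimeStepCache::loaderLoop, this) {}
    ~TimeStepCache() {
        { std::lock_guard<std::mutex> lock(m_mutex); m_quit = true; }
        m_wake.notify_one();
        m_worker.join();
    }
    // Called by the visualization thread once a time-step has been consumed.
    void requestNext(int timeStep) {
        { std::lock_guard<std::mutex> lock(m_mutex); m_nextToLoad = timeStep; }
        m_wake.notify_one();  // wake the loader without blocking the caller
    }
private:
    void loaderLoop() {
        std::unique_lock<std::mutex> lock(m_mutex);
        int loaded = -1;  // nothing resident yet, so time-step 0 loads first
        while (!m_quit) {
            m_wake.wait(lock, [&] { return m_quit || m_nextToLoad != loaded; });
            if (m_quit) break;
            int step = m_nextToLoad;
            lock.unlock();
            loadFromDisk(step);   // the blocking I/O happens off the main thread
            lock.lock();
            loaded = step;
        }
    }
    void loadFromDisk(int step) { /* read this time-step's file into its slot */ }

    int m_nextToLoad;
    bool m_quit;
    std::mutex m_mutex;
    std::condition_variable m_wake;
    std::thread m_worker;  // declared last so the mutex exists before it starts
};

The visualization thread keeps computing with in-core data while the worker services load requests, which is exactly the behavior described above.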
Simple API

The system is intended for future developers to utilize. In order to achieve this, the system must have an intuitive, modular design that maintains a high level of re-usability. Extensive documentation and coding conventions [Laramee (2010)] are maintained to allow new users to minimize the overhead required to learn the system. The system is documented using the doxygen documentation system [van Heesch (1997-2004)]; the documentation can be found online at http://cs.swan.ac.uk/~cstony/documentation/.

System design and implementation

Figure 3 shows the design of our application. The major subsystems are shown along with the relationships describing how they interact with one another. The Graphical User Interface subsystem is responsible for presenting the user with modifiable parameters and firing events in response to the user's actions. The user interface is designed to be minimalistic. It is context sensitive and only the relevant controls are displayed to the user at any time. The GUI was created using the wxWidgets library [wxWidgets GUI Library (n.d.)]. wxWidgets provides a cross-platform API with support for many common graphical widgets, greatly increasing the efficiency of GUI programming. The 3D Viewer is responsible for all rendering. It supports the rendering of several primitive types such as lines, triangles and quads. The 3D Viewer is implemented using OpenGL [Architecture Review Board (2000)] for its platform independence. The Simulation Manager stores the simulation data. It stores vector quantities such as velocity and scalar quantities such as pressure. The Simulation Manager is also responsible for ensuring the correct time-steps are loaded for the desired time. The Visualization System is used to compute the visualization results; it is comprised of several subsystems. Each major system of the application is now described in more detail.

Fig. 3. An overview of our system design. This shows the major subsystems of the framework and which systems interact with one another.

Visualization system design

The visualization system is where the visualization algorithms are implemented. The application is designed to separate the visualization algorithm logic, the rendering logic and the GUI. This allows parts of the visualization system to be integrated into other applications, even if they use different rendering and GUI APIs. This system is comprised of four sub-systems.

Geometric flow visualization subsystem

Figure 4 illustrates the processing pipeline for the geometric flow visualization subsystem. Input and output data are shown using rectangles with rounded corners; processes are shown in boxes. The geometric-based visualization subsystem uses the simulation data as its main input. After the user has set a range of integration parameters and specified the seeding conditions, the initial seeding positions are created. Numerical integration is then performed to construct the geometry by tracing vertices through the vector field. This is an iterative process, during which an optional refinement stage may be undertaken depending on the visualization method. For example, when using streamsurfaces, extra vertices need to be inserted into the mesh to ensure sufficient sampling of the vector field (a sketch of such a refinement pass follows below). Afterwards, the object geometry is output. The penultimate stage takes the user-defined parameters that direct the rendering result. Most of the implemented algorithms in our application reside within this sub-system.
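As promised above, here is a minimal sketch of one such refinement pass, assuming the streamsurface front is stored as an ordered list of particles. The names and the simple midpoint-insertion rule are illustrative; the actual refinement criteria in the system may differ.

#include <cmath>
#include <iterator>
#include <list>

struct Particle { float x, y, z; };

float distance(const Particle& a, const Particle& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// One refinement pass over a streamsurface front: wherever neighboring
// particles have drifted too far apart, a new particle is inserted between
// them so that the vector field remains sufficiently sampled.
void refineFront(std::list<Particle>& front, float maxSpacing)
{
    if (front.size() < 2) return;
    for (auto it = front.begin(); std::next(it) != front.end(); ++it) {
        const Particle& a = *it;
        const Particle& b = *std::next(it);
        if (distance(a, b) > maxSpacing) {
            Particle mid{ 0.5f * (a.x + b.x), 0.5f * (a.y + b.y),
                          0.5f * (a.z + b.z) };
            front.insert(std::next(it), mid);  // new vertex joins the front
        }
    }
}

Each inserted midpoint halves the local spacing, so the pass terminates; the new particles are then advected along with the rest of the front on the next integration step.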
Texture-based visualization subsystem

The texture-based visualization process (Figure 5) also takes the simulation data as input. An advection grid (used to warp the texture) is then set up and user parameters are specified. An input noise texture (Figure 5, inset) is then 'smeared' along the underlying velocity field, depicting the tangent information. The texture advection is performed as an iterative process of integrating the noise-texture coordinates through the vector field and accumulating the results after each integration. The resultant texture is then mapped onto a polygon to display the final visualization.

Direct flow visualization subsystem

The direct visualization sub-system presents the simplest algorithms. Typical techniques are direct color-coding and glyph plots. The left image of Figure 6 shows a basic glyph plot of a simulation of Hurricane Isabel. The right image includes a direct color-mapping of a saliency field showing local regions where a larger change in streamline geometry occurs.

Feature-based flow visualization subsystem

Feature-based algorithms may involve a lot of processing to analyze the entire simulation domain. There exist many types of features that may be extracted (such as vortices), and each feature type has a variety of algorithms to detect/extract it. In our application we implemented extraction of critical points (positions at which the velocity vanishes); a sketch of a typical cell-based candidate test is given below. The right image of Figure 6 shows a set of critical points extracted on a synthetic data set. A red highlight indicates that a source or sink exists in the cell and a blue highlight indicates that a saddle point is present in the cell.

Graphical user interface and asset management

The perfect visualization tool does not (yet) exist. Each piece of research that has been undertaken over the past several decades focuses on a specific problem. Thus, a general solution that is suitable for all visualization problems has not been discovered, and may never be found. To this end, visualization applications must support a variety of techniques in order to be useful. When referring to a visualization tool/technique in terms of our software, we refer to it as an asset. Our asset management system is designed with the following requirements:

• A common interface for assets, simplifying the process of adding new assets in the future and ensuring the application is extendable.
• A common interface between assets and the application GUI. Again, this simplifies expansion in the future and ensures a basic level of functionality is guaranteed to be implemented. This also provides a consistent user interface for the user.
• Enforcing the re-use of existing code.
• A uniform method of adding assets to the visualization at run-time.

Fortunately, the object-oriented programming paradigm and the C++ programming language provide us with a powerful set of tools to realize these requirements. The rest of this section discusses aspects of the GUI design and our framework for managing the visualization assets.
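First, the promised sketch of a cell-based candidate test for critical points. A cell can only contain a zero of the velocity if each component changes sign somewhere over the cell's corners. The chapter does not spell out its exact detector, so this 2D sign-change test is an illustration of the standard idea rather than the system's code; a flagged cell would then be examined more closely (e.g., root finding and Jacobian classification into source/sink/saddle).

#include <algorithm>

struct Vec2 { float u, v; };

// True if the given component can cross zero over the four cell corners.
bool componentChangesSign(const Vec2 corners[4], float Vec2::*comp)
{
    float mn = corners[0].*comp, mx = corners[0].*comp;
    for (int i = 1; i < 4; ++i) {
        mn = std::min(mn, corners[i].*comp);
        mx = std::max(mx, corners[i].*comp);
    }
    return mn <= 0.0f && mx >= 0.0f;
}

// Necessary (not sufficient) condition: both u and v must be able to vanish.
bool cellMayContainCriticalPoint(const Vec2 corners[4])
{
    return componentChangesSign(corners, &Vec2::u) &&
           componentChangesSign(corners, &Vec2::v);
}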
Application tree and scene graph

In order to provide a flexible system that allows the user to interactively add and remove assets at run-time, we utilize a scene graph. A scene graph is a tree data structure in which all assets are represented by nodes within the tree. When a frame is rendered, a pre-order, depth-first traversal of the tree is carried out, starting from the root node. As each node is visited, it is sent to the rendering pipeline. Transformations applied to a node are passed on to its children. We provide two node types: asset nodes and camera nodes. These are derived from a base node class, which provides a common interface and is not directly instantiable. The inheritance diagram for the node types is shown in Figure 7.

The tree structure used for the scene graph lends itself to representation by a GUI tree control (see Figure 8). The tree control directly depicts all of the nodes in the scene graph and the tree hierarchy. The user manipulates the scene graph through the tree control. Assets can be added to the scene graph by selecting a node to which an asset is attached. Right-clicking upon an asset presents a context menu with an option to add a new node into the scene graph (see Figure 9). Following this option, another context menu is presented with a variety of assets which the user is able to add. When an asset is selected to be added, it is inserted into the scene graph as a child node of the currently selected node (the node which was right-clicked). Removal of a node is achieved using a similar method: the right-click context menu gives the option of removing a node. When a node is removed from the scene graph, all of its children are also removed. This ensures that there are no dangling pointers and that acquired resources are freed. The resource-acquisition-is-initialization (RAII) [Meyers (2005)] programming idiom is obeyed throughout the application to ensure exception-safe code and that resources are deallocated.

From a user perspective, this system allows a flexible method with which to interactively add and remove the visualization tools at run-time. The current tool set is always displayed to provide fast and easy access. From a developer perspective, this system provides a consistent interface. The logic for adding and removing a node is maintained in the scene graph, application tree and node classes; it does not need implementing on a per-asset basis. When a new visualization technique is implemented, all that is required is that the developer inherits from the asset node class and provides the implementation for the pure virtual functions declared by the abstract asset node class (described in more detail in Section 4.2.3). In addition to the asset node, we provide a class called camera node which is responsible for storing and configuring the projection and viewpoint information. We now discuss the camera node and the asset node classes in more detail.

Camera node

3D APIs such as OpenGL and DirectX have no concept of a camera. The viewpoint is always located at the position (0.0, 0.0, 0.0) in eye-space coordinates (for a thorough discussion of coordinate spaces and the OpenGL pipeline we refer the reader to [Woo et al. (2007)]). However, the concept of a camera navigating through a 3D scene provides an intuitive description. We can give the appearance of a movable camera by moving the scene by the inverse of the desired camera transformation. For example, to simulate the effect of the camera panning upwards, we simply move the entire scene downwards.
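A condensed sketch of this node hierarchy and its pre-order traversal follows, including a camera node that applies the inverse view transform at the root. The names Node, apply() and restore() are our illustrative stand-ins for the framework's actual interfaces.

#include <memory>
#include <vector>

class Node {
public:
    virtual ~Node() = default;
    void addChild(std::unique_ptr<Node> child) {
        m_children.push_back(std::move(child));
    }
    // Pre-order, depth-first traversal: a node is processed before its
    // children, so a node's transformation is inherited by its subtree.
    void traverse() {
        apply();                          // push transform / issue draw calls
        for (auto& c : m_children) c->traverse();
        restore();                        // pop the transform state
    }
protected:
    virtual void apply() = 0;             // base node is not instantiable
    virtual void restore() {}
private:
    std::vector<std::unique_ptr<Node>> m_children;
};

class CameraNode : public Node {          // placed at the root of the graph
protected:
    void apply() override { /* load the inverse of the camera transform */ }
};

class AssetNode : public Node {           // base for visualization techniques
protected:
    void apply() override { /* set state, bind buffers, render the asset */ }
};

Because the camera node sits at the root and its apply() loads the inverse camera transform, every descendant is automatically rendered from the camera's viewpoint, matching the analogy described above.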
As outlined in Section 4.2.1, all child nodes inherit the transformations of their parent. The camera node is set as the root node in the scene graph. The inverse transformation matrix is re-computed when the camera is manipulated. All other nodes are added as descendants of the camera node and are, therefore, transformed by its transformation matrix. Thus, the camera parameters are the main factor in setting the viewpoint and orientation. This is in line with the camera analogy described at the beginning of this section.

Fig. 9. The tree control is used to add new nodes into the scene graph. The user selects the node they want to add an asset to. A context menu then presents the user with a list of assets. When an asset is selected, it is added to the scene graph as a child node of the currently selected node.

This method can be extended to render to multiple viewports with different viewpoints. This could be realized by maintaining a list of camera nodes, each maintaining its own set of view parameters (like using multiple cameras in real life). For each viewpoint, the relevant camera node can be inserted into the root node position. The scene graph is then traversed, sending each node to the rendering pipeline. This allows the same scene graph to be used; the only change is the initial camera transform.

Asset node

Figure 10 shows the collaboration graph for the asset node class. This class is designed to provide a consistent interface for all visualization methods integrated into the application. It is an abstract class and therefore provides an interface that declares common functionality. The class provides three pure virtual function signatures. One of these sets the initial/default material properties that are used by the OpenGL API. Finally, the loadDefaultConfiguration() function loads the default set of parameters for the visualization method from file. The configuration files follow the INI file format [INI File (n.d.)]. This function is provided to ensure that all visualization methods are loaded with sensible default values (where necessary). Providing the configuration information in a file, rather than hard-coding it into the application, brings several benefits. A change to default parameters does not result in any re-compilation, bringing speed benefits during development. It also means that the end user can change the default settings without having to possess or understand the source code. It also allows users on different machines to each have their own set of default parameters tailored to their requirements. It would be a simple task to allow per-user configuration files on a single machine; however, we have not implemented this functionality as it is superfluous to our requirements as a research platform.
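A minimal sketch of how such a loadDefaultConfiguration() might parse its INI-style file is shown below. It reads simple key=value lines and skips comments and section headers; the file name and keys in the usage comment are illustrative only.

#include <fstream>
#include <map>
#include <sstream>
#include <string>

std::map<std::string, std::string> loadIniValues(const std::string& path)
{
    std::map<std::string, std::string> values;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        // skip blank lines, ';' comments and '[section]' headers
        if (line.empty() || line[0] == ';' || line[0] == '[') continue;
        std::istringstream ss(line);
        std::string key, value;
        if (std::getline(ss, key, '=') && std::getline(ss, value))
            values[key] = value;  // later lookups supply per-asset defaults
    }
    return values;
}

// Hypothetical use inside an asset's loadDefaultConfiguration():
//   auto cfg = loadIniValues("streamsurface.ini");
//   m_stepSize = std::stof(cfg["stepSize"]);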
The asset node class also provides several member variables that are inherited. OpenGL assigns numeric IDs to all buffers; asset nodes provide variables to store a vertex buffer (m_vboID), an index buffer (m_indexID) and a list of textures (m_texID). More than one texture can be assigned to an asset node in order to facilitate multi-texturing. Materials are settings that affect how geometry reflects light within OpenGL. A material is separated into several components: ambient, diffuse, specular and emissive. The asset node provides all renderable objects with a material property. It also provides a color property; this is used in a similar fashion to a material but is much more lightweight, with less flexibility. OpenGL is a state machine, where the current state affects how any primitives passed to it are rendered. Whether lighting and/or texturing are enabled are examples of some of the states used by OpenGL [Architecture Review Board (2000)]. Every asset node has a state member variable which allows the node to store various OpenGL state settings plus other non-OpenGL state parameters. The state class is described in more detail in Section 4.3.2.

Asset user-interaction and the asset control pane

User-specified parameters for the various visualization assets are provided through the asset control pane. When an asset is selected in the application tree control (Section 4.2.1), the asset control pane is updated. The asset control pane shows only the controls for the currently selected asset. This helps reduce clutter in the GUI and provides an easier experience for the user. The asset control pane also populates the controls with the current values of the asset; therefore, the GUI always represents the correct values for the selected asset. The asset panel can be seen in the blue box of Figure 1. The use of C++ pure virtual functions ensures that the GUI panel for each visualization asset must implement functionality to update itself according to the current state of the active asset it is controlling. The GUI panels are now discussed in more detail.

Asset panels

Figure 11 shows examples of asset panels at runtime. The left panel shows the controls displayed when a streamsurface asset is selected by the user. The right image shows the asset panel after the user has selected a different visualization asset, in this case a streamline set. Note how the streamsurface panel has been removed and replaced with the streamline set panel. Other controls relevant to the selected tool (such as state parameters) are neatly set in separate tabs. This has two benefits: it keeps the visualization tool parameters and the OpenGL rendering parameters for the tool separate, and we can re-use the same GUI panel for state controls, as those parameters are common across all visualization methods.
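Collecting the inherited members described earlier in this section, the asset node's data might be summarized as follows. This is a condensed reconstruction from the text, not the verbatim class declaration; the stub types and the GLuint stand-in are our assumptions.

using GLuint = unsigned int;  // stand-in for the OpenGL typedef

#include <vector>

struct Material { float ambient[4], diffuse[4], specular[4], emissive[4]; };
struct Color    { float r, g, b, a; };
struct State    { bool lighting, texturing, blend, render; };

class AssetNode /* : public Node */ {
public:
    virtual ~AssetNode() = default;
    virtual void loadDefaultConfiguration() = 0;  // one of the pure virtuals
protected:
    GLuint m_vboID = 0;              // vertex buffer ID assigned by OpenGL
    GLuint m_indexID = 0;            // index buffer ID
    std::vector<GLuint> m_texID;     // several entries enable multi-texturing
    Material m_material{};           // ambient/diffuse/specular/emissive terms
    Color m_color{};                 // lightweight alternative to a material
    State m_state{};                 // state flags applied before rendering
};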
Asset panel controls are event-driven. When a control is modified, an event is fired which is then handled by the visualization system. The event handler typically obtains a handle to the currently selected visualization asset and calls the correct function. The visualization system is then updated and feedback is presented to the user. Asset panels utilize multiple inheritance. While multiple inheritance has its shortcomings, e.g., the diamond problem, it can provide powerful solutions if used with care. Figure 12 shows the inheritance diagram for a typical asset panel (in this case the streamsurface panel). Note that only a single level of inheritance is used. Throughout the design of this system, keeping the inheritance levels as low as possible was set out as a requirement. This ensures a shallow depth of inheritance tree (DIT), which makes the code easier to extend, test and maintain. All asset panels inherit from two base classes: one is unique to each derived class and the other is common to all derived classes. The class CommonPanel, as its name implies, is inherited by all asset panels. It contains information such as the string that is displayed when the panel is shown in the asset control pane and an enumeration of the panel type. It also provides the signature for a pure virtual function, UpdatePanel(). This function is used to populate the panel's controls with the correct values (by querying the currently selected asset). The second class each panel inherits from is a unique auto-generated class output by a GUI-building tool called wxFormBuilder. The auto-generated classes provide the panel layout and controls. They also provide the interface for the events that are fired from that panel. The asset panel then provides the implementation for the interface. In our system, the auto-generated classes are prefixed with the letters "wx" to differentiate them from user-created classes.

Fig. 11. Two examples of asset panels taken at runtime. The panel on the left shows the controls for streamsurfaces. When a streamsurface asset is selected in the application tree, this panel is inserted into the asset control pane. The right image shows the result of the user then selecting a streamline set asset. The streamsurface control panel is removed from the asset control pane and the streamline set control panel is inserted in its place. Only the controls relevant to the currently selected asset are displayed to the user. This leads to a less cluttered GUI, and the user is not burdened with manually navigating the GUI to find the appropriate controls.

Fig. 12. Inheritance diagram for asset panel types. This example shows the streamsurface panel. Asset panels use multiple inheritance: they inherit from CommonPanel and another class that is auto-generated using a GUI builder. Using this method provides fast creation of GUI controls (using the form builder and generated class) and allows us to provide a common interface and behavior for all panels (using the common panel class).
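The two-base-class pattern can be sketched as follows. The wxStreamsurfacePanelBase name imitates a wxFormBuilder-generated class; the actual generated names, layout code and wxWidgets plumbing are omitted, so this is an illustration of the structure rather than the system's source.

#include <string>

class CommonPanel {                        // common to all asset panels
public:
    virtual ~CommonPanel() = default;
    virtual void UpdatePanel() = 0;        // refresh controls from the asset
protected:
    std::string m_displayName;             // label shown in the control pane
};

class wxStreamsurfacePanelBase {           // auto-generated: layout + events
public:
    virtual ~wxStreamsurfacePanelBase() = default;
protected:
    virtual void OnSampleDistanceChanged() {}  // event fired by a GUI control
};

class StreamsurfacePanel : public CommonPanel,
                           public wxStreamsurfacePanelBase {
public:
    void UpdatePanel() override {
        // query the currently selected streamsurface asset and copy its
        // parameter values into this panel's controls
    }
protected:
    void OnSampleDistanceChanged() override {
        // forward the new value to the selected asset, then refresh the view
    }
};

Only one level of inheritance is used on each branch, which keeps the depth of inheritance tree shallow, as required by the design.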
The asset panels are designed with both developers and users in mind. The updating of the panel in the asset control pane ensures that only the relevant controls are displayed. The controls are also always located in the same place within the application; therefore, the user does not have to search around for various options. Similar to the node structures (Section 4.2.1), the panels are organized in a manner that facilitates easier implementation and ensures a certain level of functionality. The use of a GUI builder greatly helps the developer, increasing productivity when creating GUI components.

3D viewer

This section details the 3D viewer system of our application. We discuss some key implementation details and outline how our application manages various rendering attributes such as materials and textures. The 3D viewer system is implemented using the OpenGL API, which was chosen because it provides a high-level interface for utilizing graphics hardware and is platform independent. The 3D viewer is responsible for providing the visual feedback from the visualization software. Recall that OpenGL defines a state machine whose current rendering state affects how the primitives passed through the graphics pipeline are rendered. State machines can make debugging difficult; unexpected behavior may arise simply from a state being changed that the developer is unaware of. Querying the current state may be difficult at times and almost always relies on dumping text to a console window or file. To alleviate this issue, our system implements a wrapper around the OpenGL state machine. Our OGL_Renderer (OpenGL Renderer) class provides flags for the OpenGL states used within our system. Other states may be added as they are utilized by the system. We also provide accessor and mutator functions for retrieving and manipulating state values. Our wrapper provides several benefits:

• Breakpoints may be set to halt the program when a specific state value has been modified.
• Bounds checking may be performed on states as a sanity check, making sure no invalid values are set.
• When using an integrated development environment (IDE), the class can be queried easily and does not rely on the outputting of large volumes of text that the user has to manually search through.
• Some OpenGL code can be simplified, making development easier and more efficient.
• Separating the graphics API code allows for other APIs to be used in the future if the requirement arises. This is very difficult if API-specific code is embedded throughout the entire codebase.
• It aids the developer, who is able to focus more on the visualization algorithms than on the rendering component, thus promoting the system as a research platform.

Our system only requires a single rendering context (if multiple viewports are present, the same rendering context can be used). We utilize the Singleton design pattern [Gamma et al. (1994)] so that instantiation of the OGL_Renderer is restricted to a single instance.
We note that a singleton has downsides, as it is in essence a global variable. However, the OpenGL state machine is inherently global, and the fact that we only want a single rendering context makes a singleton suitable for our needs. In our case, a singleton provides a much cleaner solution than continually passing references around, or than every object (that needs one) storing its own reference to the renderer object. Access to the singleton is provided by the following C++ public static function:

static OGL_Renderer& OGL_Renderer::Instance()
{
    static OGL_Renderer instance;
    return instance;
}

The first time this function is called, an instance of the OGL_Renderer is created and a reference to it is returned. Future calls to this function do not create a new instance (due to the static variable) and a reference to the current instance is returned.

Rendering

OpenGL rendering code can be ugly and cumbersome if not carefully designed. The API uses C-style syntax, which does not necessarily interleave well with C++ code in terms of readability. Many calls are usually made to set the OpenGL state before sending the geometry data along the rendering pipeline. Here is an example of OpenGL code that renders a set of vertices that are already stored in a vertex buffer on the GPU:

glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vector3<float>), NULL);
glBindBuffer(GL_ARRAY_BUFFER, normalBufferID);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(Vector3<float>), NULL);
glBindBuffer(GL_ARRAY_BUFFER, textureBufferID);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(1, GL_FLOAT, sizeof(float), NULL);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexId);
glDrawElements(GL_TRIANGLE_STRIP, numVerts, GL_UNSIGNED_INT, NULL);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, NULL);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, NULL);

This renders an indexed set of vertices as a strip of triangles with shading and texturing information. First the buffers, and the pointers into them, are set. The vertices are then passed down the rendering pipeline. The state changes are undone after rendering to put OpenGL back into its original state. It is clear this is not the simplest code to work with. If the rendering code were merged into the visualization code, all renderable objects would possess similar code chunks. This (1) makes the code harder to read and (2) produces a lot of repetitive code throughout the codebase.

Our system segregates this type of rendering code. We provide classes such as TriangleRenderer and LineRenderer which contain utility functions that simplify the rendering process. A typical usage of the triangle renderer is shown below.

...
TriangleRenderer::RenderTriangle_VBO(m_vboID, m_indexID, m_numberOfIndices, TRIANGLE_STRIP);
...

This call to the RenderTriangle_VBO function passes in the required buffers, the number of vertices to be rendered and the rendering mode. This approach allows the developer to take advantage of code re-use and makes the code much more readable.

State objects

We provide a State class that encapsulates the various OpenGL states utilized by our visualization assets. The state class has the following members:

• (bool) m_lighting;
• (bool) m_texturing;
• (bool) m_blend;
• (uint) m_program;
• (int) m_stateBlendSrc;
• (int) m_stateBlendDst;
• (bool) m_render;

The first three bool members are flags indicating whether the matching OpenGL state will be enabled. The m_program member is the ID of the shader program that is used to render the asset. The blend members store the blending states when blending is enabled. The final member, m_render, indicates whether the asset is rendered or ignored. This member has no counterpart in the OpenGL state machine; it is included to allow the user to disable the rendering of an asset without removing it from the scene graph. The state class has a member function, SetState(), which is called immediately before the asset is rendered:

void State::SetState()
{
    OGL_Renderer& renderer = OGL_Renderer::Instance();
    if (m_lighting) renderer.Enable(LIGHTING);
    if (m_texturing) renderer.Enable(TEXTURING);
    ...
}

Note that we omit a thorough discussion of how OpenGL approximates lighting and materials; instead we refer the interested reader to [Woo et al. (2007)].
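To make the wrapper itself concrete, the following is a minimal sketch of how OGL_Renderer's flag tracking might look. The StateFlag enumeration and the validation details are our illustrative assumptions rather than the system's actual declarations.

#include <cassert>

enum StateFlag { LIGHTING, TEXTURING, BLENDING, NUM_STATE_FLAGS };

class OGL_Renderer {
public:
    static OGL_Renderer& Instance() {
        static OGL_Renderer instance;   // created on first use, as shown above
        return instance;
    }
    void Enable(StateFlag flag) {
        assert(flag < NUM_STATE_FLAGS); // sanity check on the state value
        m_flags[flag] = true;           // a breakpoint here catches every change
        // forward to OpenGL here, e.g. glEnable(toGLenum(flag));
    }
    void Disable(StateFlag flag) {
        assert(flag < NUM_STATE_FLAGS);
        m_flags[flag] = false;
        // forward to OpenGL here, e.g. glDisable(toGLenum(flag));
    }
    bool IsEnabled(StateFlag flag) const { return m_flags[flag]; }
private:
    OGL_Renderer() { for (bool& f : m_flags) f = false; }
    OGL_Renderer(const OGL_Renderer&) = delete;
    OGL_Renderer& operator=(const OGL_Renderer&) = delete;
    bool m_flags[NUM_STATE_FLAGS];
};

Because the flags mirror the OpenGL state, the current state can be inspected in an IDE or guarded with breakpoints, which is precisely the debugging benefit the wrapper is designed to provide.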
Textures, texture management and texture editing

As previously discussed, our application has served as a research platform for flow visualization techniques. More specifically, we have focused on a sub-set of flow visualization techniques that fall into the geometry-based category. These methods compute geometry that represents some behavior of a flow field. However, by color-mapping this geometry we can depict more information about the flow behavior than the geometry alone; for example, velocity magnitude is often mapped to color.

Color-mapping can be achieved in a variety of ways. A function may be provided that maps the color, although for complex mappings defining a suitable function may be difficult. A large lookup table may be produced; this is a flexible solution but can lead to the developer writing a lot of code to build large lookup tables.

Our approach to color-mapping utilizes texture-mapping. Here the texture itself is the lookup table, and all we have to do is provide the texture coordinate to retrieve the desired value from the texture. Textures are a very powerful tool in computer graphics, and rendering APIs readily provide functionality for various interpolation schemes which we can utilize. They are also fast due to their hardware support. This system is also very flexible: new color maps (in the form of images) can be dropped into the textures folder of the application and they will automatically be loaded the next time the application is run. Management of the textures is equally simple. The texture manager maintains a list of textures; the user can select the texture they wish to use from the GUI, and the texture manager binds that texture to the OpenGL state. We also provide a tool that allows the user to create their own color maps. This allows the user to customize the color-mapping at run-time to ensure that the mapping adequately represents the information they wish to present. Figure 13 shows some steps of an interactive session with the editor. The editor allows the user to insert (and remove) samples along the color map. The color of a sample can be altered, and the position of a sample can be updated by dragging it around in the editor window. The color values are interpolated between samples. An up-to-date preview of the color map is always displayed within the editor.

Simulation manager

The final major system in our application is the simulation manager. The simulation manager is responsible for loading the simulation data and managing sub-sets of the simulation data when it will not fit in core memory. The simulation manager provides a set of classes for 2D and 3D simulations. It handles both discretely sampled data, such as the output from CFD simulations, and analytically defined data, by providing the necessary parameters to a function that computes the vector information. Flow simulations are output in a variety of file formats using both ASCII and binary file output. Our application supports a range of formats and provides a simple interface for developers to add support for more formats in the future.
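Returning briefly to the texture-based color-mapping described above: the mechanism reduces to normalizing a scalar into [0, 1] and using it as a texture coordinate, so the graphics hardware performs the color interpolation. The sketch below illustrates this; the fixed-function GL calls in the usage comment reflect the style used elsewhere in the text, and the variable names are illustrative (maxVal is assumed to be strictly greater than minVal).

float toTexCoord(float scalar, float minVal, float maxVal)
{
    float t = (scalar - minVal) / (maxVal - minVal);  // normalize into [0, 1]
    if (t < 0.0f) t = 0.0f;                           // clamp out-of-range samples
    if (t > 1.0f) t = 1.0f;
    return t;
}

// Hypothetical per-vertex usage while emitting geometry:
//   glBindTexture(GL_TEXTURE_1D, colorMapID);
//   glTexCoord1f(toTexCoord(velocityMagnitude, vMin, vMax));
//   glVertex3f(p.x, p.y, p.z);

Swapping the bound 1D texture is all that is needed to switch color maps, which is why dropping a new image into the textures folder suffices.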
The simulation manager is used whenever a vector field evaluation is requested by one of the visualization assets. It is responsible for determining whether a given position lies within the domain (both spatially and temporally). If the position is determined to be valid, the simulation manager populates a cell object (of the corresponding grid type) with the appropriate vector values. The cell objects also belong to the simulation manager and are used to construct the final vector value at the desired position using interpolation.

Large time-dependent simulation data

As previously discussed, the output from time-dependent CFD simulations can be of the order of gigabytes or even terabytes. Thus, we have to consider out-of-core methods. Our application handles such large amounts of data by only loading a sub-set of the simulation into memory. In order to perform a single integration step, only two time-steps need to be present in main memory. For example, if our simulation outputs data every second and we need to advect a particle at t = 3.5 s, only time-steps 3 and 4 are needed to interpolate the required vector values.

We employ a method similar to Bürger et al. [Bürger et al. (2007)]. We allocate a number of slots equal to the number of time-steps that fit into main memory. These slots are then populated with the data from a single time-step each, starting with the first time-step and proceeding consecutively. For example, if we can fit 6 time-steps into memory, we allocate 6 slots and populate them with time-steps 0-5. When we have passed through a time-step, its data is unloaded and the next time-step is loaded from file in its place. For example, if we are constructing a pathline, when t ≥ 1 the first slot (which holds the data for time-step 0) is overwritten with the data for time-step 6, the next unloaded time-step in the simulation. Figure 14 illustrates an example.

Conceptually, a sliding window runs over the slots, with the pair of slots covered by the window being used for the current vector field evaluations. When the sliding window has passed a slot, the slot is updated with the next unloaded time-step. When the sliding window reaches the last slot, it wraps around to the first slot and the cycle is repeated. The sliding window transition is triggered when a time greater than the current time period covered by the window is requested by the application.

For this method to be effective, the simulation manager runs in a separate thread. Disk transfer operations are blocking calls, and they halt the rest of the application if a single thread is used. Moving these blocking calls to a separate thread allows the application to proceed with computing visualization results while data is loaded in the background. Note that there may be times when the visualization results are computed faster than the simulation manager can load the data. If the required time-steps are not present in memory, the application has no option but to halt until they have been loaded. However, even in this case, the multi-threaded simulation manager reduces the number and duration of halts compared to a single-threaded solution.
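The slot bookkeeping for this sliding window can be sketched as follows. With N slots, time-step k always occupies slot (k mod N), so the pair of slots needed for time t and the next time-step to prefetch follow directly from the arithmetic. The struct and function names here are illustrative, assuming (as in the example above) one simulation output per second.

#include <cmath>

struct SlidingWindow {
    int numSlots;  // how many time-steps fit in main memory

    int slotOf(int timeStep) const { return timeStep % numSlots; }

    // The two time-steps bracketing time t.
    void neededSteps(float t, int& lower, int& upper) const {
        lower = static_cast<int>(std::floor(t));
        upper = lower + 1;
    }

    // After the window moves past 'lower', its old slot is free and should
    // be refilled with the next time-step that is not yet resident.
    int stepToPrefetch(int lower) const { return lower + numSlots; }
};

// With SlidingWindow w{6}: advancing past t = 1 frees slot w.slotOf(0) == 0
// and w.stepToPrefetch(0) == 6 is loaded into it, matching the example in
// the text. Constructing many pathlines in lockstep keeps them all inside
// the same window position and avoids repeated paging.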
Another consideration is how the visualization assets are constructed. If we were to generate 10 pathlines by computing the first pathline, then the second one and so on, the simulation manager would have to load all time-steps 10 times (once for each pathline). It is much more efficient to construct all pathlines simultaneously, iterating over them and computing successive points. This ensures that they all require the same sliding window position in the simulation slots, and it prevents unnecessary paging of data.

Conclusion

In a typical research paper, many implementation details have to be omitted due to space constraints. It is rare to see literature that provides an in-depth discussion concerning the implementation of an entire visualization application. This chapter serves to provide such a discussion. It provides an overview of the high-level application structure and gives details of key systems and classes, along with the reasoning behind their design. Many topics are covered, ranging from multi-threaded data management for performance gains to GUI design and implementation, with consideration for both the developer and the user. We demonstrate that using good software engineering practices and design methodologies provides an enhanced experience for both the developers and the end-users of the software.

Fig. 1. A screenshot of the application showing the GUI providing controls for streamsurfaces computed on a simulation of Rayleigh-Bénard convection. The application window is split into three distinct regions. 1. The Application Tree (highlighted by the red box) is used to manage the assets in the scene. 2. The Rendering Window (highlighted by the green box) displays the visualization results and allows the user to interactively modify the viewing position and orientation. 3. The Asset Control Pane (highlighted by the blue box) displays the current set of controls for the selected asset. The GUI is context sensitive for the benefit of the user, and the asset control pane only displays the controls for a single tool at any given time. Should a different visualization tool be selected, a new set of controls is displayed and the unrequired ones are removed. This approach is adopted to provide a simple, uncluttered interface, allowing the user to focus only on the necessary parameters/controls.

Fig. 4. (Left) The processing pipeline for the geometric flow visualization subsystem. (Right) A set of streamlines generated by the geometric flow visualization subsystem. The streamlines are rendered as tube structures to enhance depth perception and provide a more aesthetically appealing result. The visualization depicts interesting vortical behavior in a simulation of Arnold-Beltrami-Childress flow [Haller (2005)].

Fig. 5. (Left) The processing pipeline for the texture-based visualization subsystem. (Right) A Line Integral Convolution (LIC) visualization using the texture-based visualization system. This image was generated by 'smearing' the noise texture (inset) along the direction of the underlying vector field at each pixel. The visualization is of a simulation of Hurricane Isabel. The eye of the hurricane can be seen towards the top of the image.
Fig. 6. Direct and feature-based visualizations. The left image shows a basic glyph plot of the velocity field of a simulation of Hurricane Isabel. The right image shows the critical points extracted on a synthetic data set. The cells that contain the critical points are highlighted. A red highlight indicates the critical point is a source or a sink and a blue highlight indicates a saddle point. This visualization also contains a direct color-mapping of a saliency field based on local changes in streamline geometry.

Fig. 7. Inheritance diagram for the node classes. The asset node is the interface from which all integrated visualization techniques inherit and implement.

Fig. 8. Screenshots of the application tree during the run-time of different sessions. The application tree is a GUI tree control that represents the nodes in the scene graph. (Left) Several visualization assets, such as streamline sets and slice probes, are currently being employed. (Right) The user is editing the label of one of the assets.

Fig. 10. Collaboration diagram for the asset node class. The boxes represent classes in our framework.

Fig. 13. Some images from an interactive session with the color map editor. The top-left image shows the initial state of the editor. The top-right image shows the result when the user inserts a new sample (black) in the center of the color map. The bottom-left image shows the result after the user has updated the color of the middle sample to yellow. Finally, the bottom-right image shows the effect of dragging the middle sample to the right. The color values between each sample are constructed using interpolation.
Fig. 14. These four tables show the time-steps that are loaded into the simulation manager slots for given time periods. The grey cells show the time-steps that are used to perform any vector field evaluations for the stated time period. (a) Shows the first time period (0 ≤ t < 1). (b) Shows the next time period; the two slots used in the vector field evaluation have moved over, forming a sliding window. The previous slot has been updated with the next unloaded time-step in the simulation (slot 0 is loaded with time-step 6). (c) The slots wrap around: when the sliding window reaches the last slot it switches back to the first slot. (d) The process repeats with the new time-steps in the slots.